better yet, listen to a 1h meeting and compare notes/action points
If you want to ineffectivly filter out most candidates just auto-reject everything that doesn’t arrive on a timestamp ending in 1.
There's a bizarro version of this guy who rejects people who do it in their head because they weren't told to not use an interpreter and he values them using the tools available to solve a problem. In his mind, the = is definitely part of the code, you should have double checked.
That does change it. In that I can see how false negatives may arise. Though, when hiring you generally care a lot more about false positives than negatives.
Really, the better test would be to not discriminate on it before you know it's useful, but store their answer to compare later.
This isn't a good methodology. To do your validation correctly, you'd want to hire some percentage of candidates who get it wrong and see what happens.
Your way, you're validating whether the test is informative as to passing rate in the next stage of your hiring process, not whether it's informative as to performance on the job.
(Related: the 'stage' model of hiring is a bad idea.)
result = result + x
// 5 0 5I'd love to know what logic path you followed to get 5 though!
Instead of creating a test that specifically aims for those bullet points, many technical assessments end up with convoluted scaffolding when actually, only those key bullet points really matter.
Like the OP, I can usually tell if a candidate has the technical chops in just a handful of really straightforward questions for a number of technical domains.
But along that thought, I’ve always held that a human conversation is the best filter. I’ll ask you what do you work on recently, what did you learn, what did you mess up, what did you hate about the tool / language / framework.
I strongly believe your ability to articulate your situation corresponds with your ability to do the job.
Can you re-frame your process or the prompt to only elicit those specific responses?
So instead of a whole exercise of building a React app or a whole backend API, for example, what would really "wow" you if you saw the candidate do it in the submission for a project? Could you re-frame your whole process so you only target those specific responses and elicit specific outputs?
Now that you've taken what was previously a 2 hour coding exercise (for example) and distilled down to 3-4 key questions, you can seek the same outputs in 15-30 minutes instead.
There are several advantages to this:
1) Many times, candidates know the answer but they actually can't figure out what you're looking for when there's a lot of cruft around the question/problem being solved. You can end up losing candidates that know how to solve the problem the way you want, but because of the way the question was posed, the objective is opaque.
2) It saves a lot of time for both sides. Interviewer doesn't have to review a big submission, candidate doesn't have to waste time doing a long project.
3) By condensing the cycle, you can evaluate more candidates and you can hopefully select a top candidate before they get another opportunity. You shorten the cycle time because the candidate doesn't have to find free time to do a project or sit down for long interviews, you don't need to have people review the code submissions, etc.
One of the absolute hardest part of my business is really hiring qualified candidates, and it's really demoralizing and time consuming and unbelievably expensive. The best that I've managed to do is the same that pretty much every other business owner says... which is that I can usually (not always) filter out the bad candidates (along with some false negatives), and have some degree of luck in hiring good candidates (with some false positives).
So this is not really foolproof, and also makes me think that feeding screenshots to AI is probably better than copy-pasting
There was story like this on NPR recently where a professor used this method to weed out students who were using AI to answer an essay question about a book the class was assigned to read. The book mentioned nothing about Marxism, but the prof inserted unseeable text into the question such that when it was copy&pasted into an AI chat it added an extra instruction to make sure to talk about Marxism in relation to this book (which wasn't at all related to Marxism). When he got answers that talked extensively about the book in Marxist terms he knew that they had used AI.
Maybe the question could be flipped on its head to filter further with "50% of applicants get this question wrong -- why?" to where someone more inquisitive like you might inspect it, but that's probably more of a frontend question.
That should avoid messing things up for people with screen readers while still trapping the copy+pasters.
I've had clipboard events and the clipboard API disabled in my browser to prevent websites from intercepting them for ages. I can't be the only one.
This is completely 100% inaccurate. I am using the latest version of NVDA, the primary open source screen reader for Windows.
Also, and this should go without saying, but PLEASE don't say these things if you don't actually know?
If someone is visually impaired, it's short enough you can just read the problem text to them.
I'm pretty sure the intent is to weed people out well before they get to a point where you could share a screen with them. He mentioned a few people "resubmitted the application", so sure this is probably an initial step.
Considering there's no (explicit) instruction forbidding or discouraging it, I'd consider the REPL solution to be perfectly valid. In fact some interview tests specifically look for this kind of problem solving.
I get it still, I'd expect some valuable signal from this test. Candidates who execute this code are likely to do so because they really want to avoid running the code in their head, not just because it's more straightforward, and that's probably a bad sign. And pasting that into an LLM instead of a REPL would be a massive red flag.
I just don't think answering "-11" here is a signal strong enough to disqualify candidates on its own.
OR:
I could run the code in the interpreter and be 100% certain.
I know what attitude I would prefer out of my developers.
[1] Yeah, I'm super cynical about stories like this, and know that many if not most are just invented shower thoughts manifested into a fictional reality.
[2] Alternately they're pasting to a normal editor -- even notepad -- for a more coherent programming font, where again the = appears.
Which I understand is my issue to work on, but if I were interviewing, I'd ask candidates to verbalize or write out their thought process to get a sense of who is overthinking or doubting themselves.
And if in your doubt you decided to run it through the interpreter to get the "real" answer, whoops, you're rejected.
I don't know then. I can open up a terminal with a python and paste it really fast, faster than run it in my head.
Safari's reader sees the =. Edge does not.
I’ve used similar tests in interviews before (a function that behaves like atoi and candidates have to figure out what it’s doing) and the good candidates are able to go over the code and hold values in their head across multiple iterations of a loop.
There are many candidates who can’t do this.
IMHO this is a dumb test
They had some leet code problem prepared and I tried solving it and failed.
During the challenge, I used some python string operand (:-1) (and maybe some other stuff) that they didn't knew.
In the end, I failed the challenge as I didn't do it in the O(n) way...
These kind of stupid challenges exemplify what's wrong with hiring these days: one interviewer, usually some "vp"/"head of" decides what is the "correct" way to write some code, when they (sometimes) themselves couldn't write a line of code (since they've been "managers" for a millennia)
ps. they actually did not know what `:-1` means ...I rest my case
Just to be clear: the main problem is not that they did not know what `:-1` was - there are many weird syntax additions with every version - understandable.
IMHO the problem is that there's usually a single interviewer that decides go/no go.
We all have biases, so leaving such an important decision (like hiring an EM) to one person is, (again IMHO) ...stupid .
That is, if you're really interested in pursuing the position.
Not only are you willing to take their tests, but you go beyond what is required, for your own benefit and edification.
That's why, when presented with the URL during the interview, you immediately load it, and right-click View Source into another tab, while simultaneously making small talk with the former CTO interviewer.
Even though you're a backender, you know enough frontend to navigate the interspersed style and html and javascript and so, you solve both puzzles, and weave into the conversation the two answers, while also deciding that this is probably not the gig for you, but who knows, let's see how they answer your questions now...
Terrified of this becoming mainstream.
Especially in the current political environment... FUCK
apothegm•2h ago
koakuma-chan•2h ago
koakuma-chan•2h ago
ano-ther•2h ago
gs17•2h ago
bmacho•2h ago
Thus, if you get the wrong answer, you "cheated" (or used reader mode)
amelius•2h ago
But then the question is, how do you reach people who filter out the job ads?
gnabgib•1h ago
It also excludes users of Lynx, cURL, possibly people using accessibility tools, those with custom/overriding style sheets.
llm_nerd•2h ago
Though a gap in their process is that, as you mentioned, various reading modes also remove styles and likewise might see it. Though a reader should filter out opacity:0 content.
gs17•2h ago
Not just an "AI solver", a Python interpreter will also do it "wrong". The idea is that it's so simple that anyone qualified should be able to do it in their head, so they get the answer without the equals sign (but IMO a qualified applicant might also know it takes 5 seconds to run it in the repl and it'd be better to be correct than to use the fewest tools, or might be using a screen reader).
Piraty•2h ago