Agreed. ‘sendOnce’ implies something very specific in most async settings and, in this interview question, is being used to mean something rather different.
As it stands, we still don't know why the server was broken in this way, or why they created a workaround in the client instead of fixing the server.
what is the delay actually doing? does it actually introduce bugs into that backend? how do we check that?
It’s not the ability to communicate effectively that’s at play here, it’s your ability to read your interviewer’s thoughts. Sure, if you work with stakeholders you need some of that as well, but you can typically iterate with them as needed, whereas you get a single shot in the interview.
Plenty of times, at the end of the interview, I do have a better mental picture of the problem and can come up with a way better solution, but “hey, 1h has already passed so get the fuck out of here. Next!”
We’d get on calls with them and they’d be like “you can’t do multithreading!” We eventually parsed out that what they literally meant was that we could only make a single request to their API at a time. We had to integrate with them, and they weren’t going to fix it on their side.
(Our solve ended up being a lot more complicated than this, as we had multiple processes across multiple machines that were potentially making concurrent requests.)
Far easier than the original single-threaded solution, and it has fault tolerance baked in because you can run it on multiple clients.
I think it has clear requirements and opportunities for nudges from the interviewer without invalidating the assessment (when someone inevitably gets tunnel vision on one particular requirement). It has plenty of ways for an interviewee to demonstrate their knowledge and solve the problem in different ways.
I've run debounce interview questions that attempt to exercise similar competencies from candidates, layering on requirements time allowing (leading/trailing edge, cancel, etc.; a rough sketch follows below), and this queue form honestly feels closer to what I'd expect devs to actually have built in their day to day.
We actually have this pattern in our codebase and, while we don’t have all the features on top, it’s a succinct enough thing to understand that also gives lots of opportunity for discussion.
I do agree that this is quite javascript specific though.
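For reference, a minimal debounce sketch with the leading/trailing/cancel knobs mentioned above (illustrative only, not the article's question):

function debounce(fn, waitMs, { leading = false } = {}) {
  let timer = null;
  function debounced(...args) {
    const callNow = leading && timer === null;
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      if (!leading) fn(...args); // trailing-edge call once the burst settles
    }, waitMs);
    if (callNow) fn(...args);    // leading-edge call at the start of a burst
  }
  debounced.cancel = () => {
    if (timer !== null) clearTimeout(timer);
    timer = null;
  };
  return debounced;
}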
For this first implementation, I don't see anything ever added to the queue. Am I missing something? A new task is added to the queue only if the queue is not empty, but when the queue is empty the task is executed immediately and the queue stays empty, so in the end the queue is always empty?
If there is some kind of cooperative multitasking going on, then it should be noted in the pseudo-code with e.g. async/await or equivalent keywords. As the code stands, send() never gives control back to the calling code until it completely finishes.
let send = (payload, callback) => fetch(...).then(callback)
fetch() returns a promise synchronously, but it's not awaited.
ALSO while JavaScript is a single threaded environment, the while solution would still basically work due to the scheduler (at least if you yield, await sleep, etc.)
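To make the cooperative multitasking explicit, here's a rough async/await sketch of what the queue-plus-flag version presumably intends; sendAsync is an assumed promise-returning wrapper around the article's callback-style send():

const queue = [];
let processing = false;

async function sendOnce(payload, callback) {
  queue.push({ payload, callback });   // every call is enqueued, even the first
  if (processing) return;              // an earlier call is already draining
  processing = true;
  while (queue.length > 0) {           // drain strictly one request at a time
    const { payload: p, callback: cb } = queue.shift();
    const response = await sendAsync(p);
    cb(response);
  }
  processing = false;
}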
It must be so boring working with you
"Ok, but if you had to code something convulted and illogical..." I tend to have trouble with these sorts of black box problems not because of the challenge but because of going down the path feels wrong I would expect my day to day at the company would be surrounded by too clever solutions.
Also, recognize a minimum requirement to solve this under interview pressure is a lot of low-level futzing with Javascript async and timeout details. Not everyone comes in with that knowledge or experience, and it's fine if that is a hard requirement but it seems ancillary to the goal of "interviewing engineers". I can't imagine anyone solving this or even knowing how to prompt AI in the right ways without a fair bit of prior knowledge.
Not advocating for this in prod but in the context of a programming puzzle it can be neat.
late edit: ironically this is also a comment on the LLM talk in TFA: messing with the event loop like this can give you a strong mental model of JS semantics. Using LLMs I would just have accepted a loop and never learned about promise chains. This is the risk in using LLMs: you plateau. If you will allow a tortured metaphor: my naive understanding of SR is that you always move at light speed, but in 4 dimensions, so the faster you move in the 3D world, the slower you move through time, and vice versa. Skill is similar: your skill vector is always a fixed size (= "talent"?). If you use LLMs, it's basically flat: complete tasks fast but learn nothing. Without them, you move diagonally upwards: always improving, but slower in the "task completion" plane. Are you ready to plateau?
let isProcessing = false;

async function checkFlagAndRun(task) {
  // If another task is already running, check again on a later event-loop tick.
  if (isProcessing) {
    return setTimeout(() => checkFlagAndRun(task), 0);
  }
  isProcessing = true;
  try {
    await task();
  } finally {
    // Release the flag even if the task throws.
    isProcessing = false;
  }
}
should do the trick. You can test it with:

function delayedLog(message, delay) {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log(message);
      resolve();
    }, delay);
  });
}

function test(name, num) {
  for (let i = 1; i <= num; i++) {
    const delay = Math.floor(Math.random() * 1000 + 1);
    checkFlagAndRun(() => delayedLog(`${name}-${i} waited ${delay} ms`, delay));
  }
}
test('t1',20); test('t2',20); test('t3',20);
BTW, for 4 scheduled tasks it basically always keeps the order, and I am not sure why. Even if the first task always runs first, the other 3 should race each other. 5 simultaneously scheduled tasks ruin the order.
https://developer.mozilla.org/en-US/docs/Web/API/Window/setT...
But does anyone else get embarrassed of their career choice when you read things like this?
I've loved software since I was a kid, but as I get older, and my friends' careers develop in private equity, medicine, law, {basically anything else}, I can tell a distinct difference between their field and mine. Like, there's no way a grown adult in another field evaluates another grown adult in the equivalent mechanism of what we see here. I know this as a fact.
I just saw a comment last week of a guy who proudly serves millions of webpages off a CSV-powered database, citing only reasons that were also covered by literally any other database.
It just doesn't feel like this is right.
Now it feels like I'm back in high school, including strict irrelevant rules to be followed, people constantly checking in on you, and especially all of the petty drama and popularity contests.
Medicine has medical school after a degree, a 5+ year residency under close supervision with significant failure rates, legal liability for malpractice, and ongoing licensing requirements.
So explain to us what it is that you "know this for a fact" regarding how they have it easier. Most of the people reading this, myself included, would never have been allowed into this industry, let alone been allowed to stay in it, if the bar were as high as law or medicine.
By comparison, failing a leetcode interview means you've got to find a new company to interview with.
This particular question is a bit ill formed and confusing I will say. But that might serve as a nice signal to the candidate that they should work elsewhere, so not all is lost.
Me and you are just not of that high layer. We’re kind of laborers given those simple aptitude tests.
When I was on track to get into that higher layer 15 years ago, I got my last job just by invitation and a half-hour talk with a VP. The next offer and other invitations soon came the same way, yet I got lazy and stuck at my current job, simplemindedly digging the trench deeper and deeper like a laborer.
On the contrary, it makes me proud. In private equity, medicine, or law, if you have the right accent and went to a good school and have some good names on your resume, you can get a job even if you're thoroughly incompetent - and if you're a genius but don't have the right credentials you'll probably be overlooked. In programming it still mostly comes down to whether you can actually program. Long may it continue.
I know the feeling.
The author says this is one of their favourite interview questions. I stop to wonder what the others are.
When I'm interviewing a candidate, I'm trying to assess really a few things: 1) the capability of the person I'm interviewing to have a technical conversation with another human being; 2) how this person thinks when they are presented with a problem they have to solve; and 3) can this person be trusted with important work?
For 1) and 2), coding interviews and the types of artificially constructed and unrealistic scenarios really aren't helpful, in my experience. I care a lot less about the person being able to solve one specific problem I hand them and I care a lot more about the person being able to handle the much more realistic scenario of being handed an ill-defined thing and picking it apart. Those conversations are typically much more open-ended; the goal is to hear how the person approaches the problem, what assumptions they make about the problem, and what follow-ups are needed once they realise at least one of their assumptions is wrong.
This is a really hard thing to do. For example, I imagine (but do not know) that when a medical practice hires a doctor for a certain role, there is an expectation that they already know how the human body works. For an ER doctor, you might care more about how well that person can prioritise and triage patients based on their initial symptoms. And you might also care about how that person handles an escalation when a patient presents not too awfully but is in fact seriously ill. For a GP, it's probably more important for a practice to care more about healthcare philosophy and patient care approaches rather than the prioritisation thing I mentioned above. I'm spit-balling here, but the point is these two situations are both hiring doctors. You care less about what the person knows because there is a tacit assumption that they know what they need to know; you're not giving the candidate a trial surgery or differential diagnosis (maybe... again I'm not a doctor so I don't actually know what I'm talking about here).
If I'm hiring a software engineer or performance engineer, I am trying to figure out how you approach a software design problem or a performance problem. I am not trying to figure out if you can design an async queue in a single-threaded client. This problem doesn't even generalise well to a real situation. It would be like asking a doctor to assume that a patient has no allergies.
Item number 3) is "Can this person be trusted with important work?" and this is basically impossible to determine from an interview. It's also impossible to determine from a CV. The only way to find out is to hire them and give them important work. CVs will say that a candidate was responsible for X, Y and Z. They never say what their contribution was, or whether or not that contribution was a group effort or solo. The only way to find out, is to hire. And I've hired candidates that I thought could be trusted and I was wrong. It sucks. You course-correct them and figure out how to get them to a place where they can be trusted.
Hiring is hard. It's a massive risk. Interviews only give you a partial picture. When you get a good hire it's such a blessing and reduces my anxiety. When you hire a candidate you thought would be good and turns out to be an absolute pain to manage it causes me many sleepless nights.
Is it only “async” because it’s doing it in JavaScript and the underlying network request API is asynchronous? Seems like, IMHO, a really bad way to describe the desired result since all IO in JavaScript is going to be async by default.
It's certainly serialized, but nothing fancy otherwise.
It would be synchronous if you blocked the requester until the request went through the queue and then completed. You wouldn't need to introduce async/await.
You can see examples in JS in the Node fs functions. The default ones are async, but they have some magic wrappers that make them actually synchronous and block the event loop from running until the file is loaded.
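For example, contrasting Node's standard async and sync file reads (the file path is just a placeholder):

const fs = require("fs");

// Async: the callback runs later; the event loop keeps going in the meantime.
fs.readFile("example.txt", "utf8", (err, data) => {
  if (err) throw err;
  console.log("async read:", data.length);
});

// Sync: blocks the event loop until the whole file has been read.
const contents = fs.readFileSync("example.txt", "utf8");
console.log("sync read:", contents.length);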
Interviewers have thought about the problem they propose countless times (at least once per interview they have held); each time refines their understanding of the problem, and so they become the god of their tiny realm. Candidates have less than one hour; add to that stress and a single shot to get it more or less right. You're not assessing the candidate's ability to code, nor their ability to handle new requirements as they come.
Interviewer and candidate meet at time X for a 1h session of “live coding”. A SaaS throws one problem at random at them both. Let the game begin. The company can decide if they want interviewer and candidate to collaborate to solve the problem (the SaaS is the judge), or perhaps they both need to play against each other and see who gets the optimal solution.
You can add a twist (FAANGs most likely): if the candidate submits a “better” answer than the interviewer's, the candidate takes over their job.
An LLM could very well be behind the SaaS.
Oh boy, I wouldn’t feel that nervous anymore in any interview. Fairness is the trick. One feels so underpowered when you know that the interviewer knows every detail about the proposed problem. But when both have no idea about the problem? That’s levelling the field!
Corporate life meets the squid games (I quite like it:)
(Really, it shouldn't be surprising that most technical interviewers aren't that competent, since they usually aren't selected for it.)
yuck
It's still not a great architecture, but it's different from throttling.
> this interview can be given in JavaScript or any other language
it's a language-agnostic question...but it revolves around the assumption of making a callback on request completion. which is common in JS, but if you were solving this in some other language, that's usually not idiomatic at all.
followed by:
> For candidates without JavaScript experience or doing this interview in pseudo-code, you have to tell them that there's another function available to them now with the following signature:
> declare function setTimeout(callback: () => void, delayMs: number): number;
so you add in this curveball of delaying requests (it's unclear why?), and it's trivial to solve...using a function from the JS stdlib. and if the candidate is not using JS, you need to tell them "oh there's a function from JS that you can assume is available"
> After sendOnce is implemented, it's time to make the interview a lot more interesting. This is where you can start to distinguish less skilled software engineers from more skilled software engineers. You can do this by adding a bunch of new requirements to the problem
as you originally specified it, this code is a workaround for a buggy server. and for Contrived Interview Reasons we can't modify the server at all, only the client.
in that scenario, "extend it into a generic queue with a bunch of bells and whistles" is maybe the worst design decision you could pursue? this thing, if it existed in the real world, should be named something like SingleRequestQueueForWorkingAroundHopelesslyBuggyServer with comments explaining the backstory for why it needs to exist. working around the hopelessly buggy server should be roped off into one small corner of the codebase, and not allowed to infect other code that makes normal requests to non-buggy servers.
I am not against testing deeper language understanding for a job that requires it but the layers of contrivances to make it "not only js" rightfully rubs non-js devs the wrong way. This comes from someone who loves them some js.
The AI ick at the end makes what would have been mildly interesting, incoherent and uninteresting.
Which makes the whole coding exercise moot.
What if there are 1 million users opening the browser at the same time?
The queue question is fun but doing it in the client is not right.
> So, we decide to make our server's life easier by trying to ensure, from the client, that it doesn't ever have to handle more than one request at once (at least from the same client, so we can assume this is a single-server per-client type of architecture).
const lockify = f => {
  // A single promise chain acts as the lock: every call waits on the previous one.
  let lock = Promise.resolve()
  return (...args) => {
    const result = lock.then(() => f(...args))
    // Swallow rejections on the chain so one failure doesn't jam the lock,
    // while still surfacing the original result (or rejection) to the caller.
    lock = result.catch(() => {})
    return result.then(v => v)
  }
}
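Rough usage sketch, assuming a hypothetical promise-returning wrapper around the article's callback-style send():

// Hypothetical promise wrapper around send(payload, callback).
const sendAsync = (payload) => new Promise(resolve => send(payload, resolve));

// Every call made through sendLocked now runs strictly one at a time.
const sendLocked = lockify(sendAsync);

sendLocked({ id: 1 }).then(res => console.log("first done", res));
sendLocked({ id: 2 }).then(res => console.log("second done", res));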
Or you could promisify the send function and use normal async/await.
let q = Promise.resolve(),
    sendAsync = (p) => new Promise(r => send(p, r)),
    // q must be reassigned so each send waits for the previous one to finish
    sendOnce = (p, c, ms) => setTimeout(_ => { q = q.then(_ => sendAsync(p)).then(c) }, ms)
Or you could actually spin up a new worker thread and get multithreading :P

This feels both too easy and too hard for an interview? I would expect almost any new grad to be able to implement this in the language of their choice. Adding delays makes it less trivial, except that the answer is... Just use the function provided by the language. That's the right answer for real code, but what are you really assessing by asking it?
The other examples given for fleshing it out are all pretty similar; if a candidate can do one, chances are they can do the others too. If you want to get a decent signal of candidate skill, you have to ask a question easy enough that any candidate you'd accept can answer it, then incrementally add difficulty until you've given the candidate a chance to show off the limit of their abilities (at least as applied to your question).
Otherwise you ask a too-easy question which everyone nails, then make it way too hard and everyone fails. Or you ask a too-easy question and follow it up with additional enhancements that don't actually add much difficulty, and again all the candidates look similar. That's just my experience; the author seems pleased with the question so maybe they're getting good signal out of it.
I was informed by the radio firmware guys that a certain kind of request from the host could not be handled concurrently by the radio module due to an unchecked conflict over some global piece of memory or whatever.
I created a wait-free circular buffer for serializing requests of that type, where the reply from the previous request would kick off the next one.
No mutexes, only atomic compare-swap.
I think this question is a bit confusing in its wording, even though the concept is actually quite useful in practice. First, async queues have nothing to do with network comms; you can have async queues for local tasks.
Also, while it is obvious to most that you shouldn't do this, you can also satisfy the requirements of this task by polling the queue and flag using setTimeout() or setInterval(): on each invocation, check if there is anything in the queue and, if we aren't already waiting on a response, fire off the next send().
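For concreteness, a rough sketch of that polling approach (not recommended, as noted; the queue/inFlight names are mine, and send is the article's callback-style function):

const queue = [];
let inFlight = false;

function enqueue(payload, callback) {
  queue.push({ payload, callback });
}

// Poll every 50ms: if nothing is in flight and something is queued, send it.
setInterval(() => {
  if (inFlight || queue.length === 0) return;
  inFlight = true;
  const { payload, callback } = queue.shift();
  send(payload, response => {
    inFlight = false;
    callback(response);
  });
}, 50);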
Retry logic with this system is always a problem. Do you block the queue forever by retrying a request that will never complete (which lets the queue grow unbounded), or do you give up after some number of retries? If you give up, does that invalidate all queued requests? Some? None? This becomes application-specific. For this kind of thing I have implemented it using multiple parallel queues. That is, you request a send() on a specifically named queue, so that if one queue's serialized requests break, other queues aren't affected.
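A hedged sketch of the multiple-named-queues idea, assuming a promise-returning sendAsync wrapper around send():

// Each queue name gets its own promise chain, so a stuck or failing queue
// doesn't block the others.
const queues = new Map();

function sendOnQueue(queueName, payload) {
  const tail = queues.get(queueName) ?? Promise.resolve();
  const next = tail.then(() => sendAsync(payload));
  // Keep the chain alive even if this particular request fails.
  queues.set(queueName, next.catch(() => {}));
  return next;
}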
If you do something like `sendOnce(payloadA, callbackA, 5000); sendOnce(payloadB, callbackB, 1);` should payloadB be sent in 1ms or 5000 + RTT + 1ms?
You could solve this in the JavaScript environment by using something like WebSockets or WebTransport much more trivially than by using send() which is I assume a thinly veiled fetch(). This probably fails OP’s interview but in reality leverages the lower level queueing/buffering.
A more fun and likely more illuminating question would be to do something like provide a version of send() that uses a callback for the response and ask to convert it to a promise. This is a really fun one that I had to deal with when using WebCodecs: a video decoder uses callbacks to give you frames but for example Safari has a bug where it will return frames that are encoded as delta frames out of presentation order. So the much better API is to feed a bunch of demuxed encoded chunks to a wrapper around VideoDecoder, and then wait for the resolution (or rejection) of a promise where the result is all the decoded frames at once. This problem really gets at the concept of callbacks vs promises which I think is the right level of abstraction for evaluating how someone thinks of single threaded concurrency. You also can get a really good feel for a person’s attitude here if they refuse to use callbacks or promises (or the async/await sugar around promises).
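As a rough illustration of that callback-to-promise conversion, using the article's send(payload, callback) signature (error handling is hand-waved since that signature has no error argument):

// Wrap the callback-style send() in a promise so callers can await it.
function sendAsync(payload) {
  return new Promise((resolve, reject) => {
    try {
      send(payload, resolve);  // resolve with whatever the callback receives
    } catch (err) {
      reject(err);             // only catches synchronous throws from send()
    }
  });
}

// Usage: const response = await sendAsync({ kind: "ping" });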
Of course on the embedded side you're likely using C, quite likely an RTOS and thus threads, but if you're just using a superloop then you've got a single-threaded system (though with the complication of interrupt handlers) a bit like the interview asks about. I'd probably use a state machine for this with a superloop design, just about everything "async" in embedded boils down to writing a state machine & polling it. Actually writing a fully general-purpose async queue for embedded systems is rather more work, because you'll have to consider how it can be used from within the interrupt context. You really shouldn't block in an interrupt context, so all the queue operations need to be non-blocking. That turns it into something far too complex for an interview question.
“… it doesn't ever have to handle more than one request at once (at least from the same client, so we can assume this is a single-server per-client type of architecture).“
For sure a multithreaded async queue would be a very interesting interview, but if you started with the send system the interview is constructed around, you'd run out of time quickly.