In theory, this should be the easiest discussion to pass while interviewing if you've done some kind of related work or are otherwise qualified. Instead it's just turning into another insane cargo-culted game where you need to do things according to some weird rules and hit certain buzzwords.
Your comment reads as if you think system design interviews are good if you can pass them but they are bad if you fail them.
I think you're trying to rationalize away the fact that system design involves knowing patterns and how to apply them.
In each and every single technical field, it's good to be able to improvise but it's even better to know what you are doing.
What you dismiss as "scripted / memorized hoop-jumping experience" actually translates to theoretical and practical knowledge for solving specific problems. Improvisation is a last resort for fixing problems you've never faced before.
So yeah, this brand of criticism boils down to complaining that the only good tests are the ones you can pass, and that everyone who is more experienced, prepared, and outright competent should not be given preferential treatment.
The first person can check lots of boxes in an interview but could struggle when the rubber meets the road. The second one might look bad, but he can always check Wikipedia when needed. (Replace Wikipedia with AI if needed.)
Is blindly copying the same diagrams seen in various GOTO conference slides and ByteByteGo system design newsletters to make sure all the expected parts of your 'architecture' are there really knowing what you are doing, or is it just cargo culting at the level of system design?
I think the answer lies somewhere in the middle. I don't disagree that you should be aware of these things, but drawing out the same system diagram as Netflix without actually having measured a real system to understand where the hotspots are is ultimately just guessing, at best following a known pattern without evidence it's required.
Plus, if the question is being asked by a company that genuinely could survive on HAProxy and a couple of efficient load-balanced monoliths, then it really is cargo culting, especially if you end up with something more complicated than actually required.
Do you see any evidence that candidates are blindly copying stuff around?
Also, you fail to offer any explanation of why studying system design topics is supposedly inferior to not studying and just expecting to wing it at job interviews. You only assert that hypothetical candidates may indeed have a broader technical background than you, but that their knowledge and expertise are useless compared to your uneducated improvisation skills.
Can you explain why you expect that to make sense?
> I think the answer lies somewhere in the middle. I don't disagree that you should be aware of these things, but drawing out the same system diagram as Netflix without actually having measured a real system to understand where the hotspots are is ultimately just guessing, at best following a known pattern without evidence it's required.
Why do you believe this hypothetical scenario is a concern? I mean, either this approach is useless and candidates have no advantage in following it, or this approach is enough to get candidates to outperform you at job interviews. In both scenarios, why do you think that others regurgitating information is a problem?
They also seemed annoyed that I was asking them questions that were not part of the problem statement instead of getting down to drawing a fancy diagram.
I'm sure they hired someone who drew a lot of boxes with cute logos because webscale.
It's nice that you estimated throughputs and provided your assessment. Odds are that's not the problem you were tasked to solve?
I mean, arguing your way out of proposing a design for scale reads as if you're at a test and you're claiming you shouldn't be asked some questions.
> They also seemed annoyed that I was asking them questions that were not part of the problem statement instead of getting down to drawing a fancy diagram.
Clarifying questions are an expected part of the process, but by its very nature the design process is iterative and needs an initial design to iterate on. If you do not show forward progress or present any tangible design, your own system design skills are immediately called into question, as, at the very least, it shows you're succumbing to analysis paralysis.
Think about it: you're presented with a problem, and your contribution is to complain about the problem instead of actually offering a solution? What do you think is the output valued by interviewers?
The fact that this ^^^ is run by AI shows just how irrelevant technical interviews like this are these days. Why take a closed book test for a job that you never have to work a day in your life with the book closed?
If you are an experienced dev, you can rank other devs pretty quickly with a general conversation about any technology.
Please explain why you believe that not moving forward with a candidate who fails basic coding challenges leads to "monocultures of echo chambers".
> The fact that this ^^^ is run by AI shows just how irrelevant technical interviews like this are these days. Why take a closed book test for a job that you never have to work a day in your life with the book closed?
You're trying to fabricate scenarios that never applied. There are no closed-book exams, and no company hires developers on hard skills alone.
> If you are an experienced dev, you can rank other devs pretty quickly with a general conversation about any technology.
No. No, you can't. You think you can, but you're just deluding yourself. You have no idea if your personal biases got your best candidate rejected, or if your pick is just a bottom-of-the-barrel candidate who enchanted you with buzzword lingo. This is a fact, and I've seen it play out in real life. I've seen team leads succumb to this sort of delusion, and when they realize they hired smooth-talking scrubs that need constant help from junior devs to unblock themselves, you start to hear excuses such as "he was given an opportunity but squandered it". This is something that negatively impacts everyone involved.
- In a fast system, most of the latency will come from geographic distance; an HA pair can't serve the whole world with low latency.
- How do you handle new releases? You can do HA with 2 machines, or you can do safe rollouts with blue/green machines, but you can't do HA and safe rollouts with just 2 machines.
- What if you want to test a new release with 1% of traffic? 100 small machines might be preferable to 1 big machine, or 10 medium machines each running 10 instances, or whatever (see the sketch after this list).
- What does the failover look like?
- You use an "external managed data store", but often the tricky bit is the data store. Externalising this may be the most practical option, but it doesn't communicate that you know how it needs to function.
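For the 1% canary point, here's a minimal sketch of weighted routing (hypothetical backend names; in practice you'd do this in the load balancer rather than application code):

    import random

    # Hypothetical pools: lots of small instances make fine-grained splits easy.
    STABLE = [f"stable-{i}" for i in range(99)]  # current release
    CANARY = ["canary-0"]                        # new release, ~1% of traffic

    def pick_backend():
        # Weighted choice: roughly 1 request in 100 lands on the canary pool.
        pool = CANARY if random.random() < 0.01 else STABLE
        return random.choice(pool)

    hits = sum(pick_backend().startswith("canary") for _ in range(100_000))
    print(f"canary share: {hits / 100_000:.2%}")  # hovers around 1%

The same split can't be done cleanly with a 2-machine HA pair, which is the point of the bullet.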
Alternatively this might have just been a bad interview. Many are.
No, they are expected to be presented with a set of requirements and present a solution that loosely meets them. That is used as a backdrop to ask technical questions and see how a person collaborates.
What about a new tax policy to encourage people to have more children? How can you deliver at scale if the scale is shrinking?
Really, you should think of all factors.
"system design" as it's done now by everyone is a bad interview whether I can pass it or not. You just need to hit the right buzzwords or draw the right boxes. How is that effective?
What do you think a technical interview is? Everyone steps into a job interview hoping to outperform all other candidates on whatever criteria they will be evaluated on. Vibes play a role, but don't you think that being able to demonstrate technical skills matters?
Typically, FAANG (and wannabe-FAANG) companies have overindexed on algorithm questions, which were meant to draw people from a research/mathematics-heavy background. That was well aligned with their initial needs, but in doing so they committed a sort of "original sin."
Since then they've forgotten to balance those requirements against the business applications they build, which involve CRUD-heavy work and usually require knowledge of how systems (databases, queues, performance tuning, load testing, etc.) work. Because said companies continue to overindex on algorithm skills, that knowledge is deprioritized.
When systems fail to deliver the expected performance, these software engineers seek new solutions with wild tradeoffs that may not have been required if they had been able to come up with the right system model for the software. As an example, at two FAANGs that I've closely studied (one of which I worked at), people always seem to be selecting serverless functions for scalability, then adding queues and another set of serverless functions to work the queues and slowly write to a database, when it may have been easier to just have a background thread in a server-based model that processes results and commits them. (This is just one example; I'm sure there are cases where a queue may have been necessary to facilitate an async process. That doesn't discount the fact that serverless is usually the wrong model for most stuff out there.)
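A minimal sketch of that server-based alternative (hypothetical names, SQLite standing in for the real database; the point is just that a long-lived process can own a background writer, which serverless takes away):

    import queue, sqlite3, threading

    work_q = queue.Queue()  # in-process hand-off instead of an external queue

    def writer_loop(db_path="results.db"):
        # Background thread that drains results and commits them to the DB.
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS results (payload TEXT)")
        while True:
            item = work_q.get()
            if item is None:        # shutdown sentinel
                break
            db.execute("INSERT INTO results (payload) VALUES (?)", (item,))
            db.commit()
        db.close()

    writer = threading.Thread(target=writer_loop, daemon=True)
    writer.start()

    # Request handlers just enqueue and return immediately.
    for i in range(5):
        work_q.put(f"result-{i}")

    work_q.put(None)  # stop the writer
    writer.join()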
It also has a secondary effect on the software engineering market, where said engineers are now able to market their overly complicated solutions, selling them as THE way to build software and dismissing everything else as a toy. This also helps said FAANGs market their cloud services, with salespeople excitedly speaking about some supposed "innovation".
I remember a video from a FAANG-organized event where they talked about how their database supports PITR up to the second. At the time, I was young, impressionable, and lacked said knowledge, and I came away mesmerized at how they could do this, thinking I could never write code to do that. Many years later, having read some introductory material about distributed systems and MySQL internals, it occurred to me: "why wouldn't any database worth its salt support PITR? After all, it's just replicating the WAL entries to another host!"
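A toy version of that intuition (not MySQL internals, just the "replay the log up to a timestamp" idea, with made-up entries):

    # Toy point-in-time recovery: a WAL is an ordered log of changes, so
    # restoring "up to the second" just means replaying entries whose
    # timestamp is <= the recovery target.
    wal = [
        (1000, "SET", "balance", 50),
        (1005, "SET", "balance", 75),
        (1010, "DEL", "balance", None),
    ]

    def restore_to(target_ts):
        state = {}
        for ts, op, key, value in wal:
            if ts > target_ts:
                break
            if op == "SET":
                state[key] = value
            else:  # DEL
                state.pop(key, None)
        return state

    print(restore_to(1007))  # {'balance': 75} -- the world as of t=1007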
However, said practices have already taken hold, and they have led to a generation of engineers who will continue to reach for cloud services and design overly complicated solutions.
With all due respect, why would I pay $5 a month for something that self-implodes as soon as I hit one button?
Did not expect this to be gaining traction! Working to get the bugs fixed.
We're using the beta/preview Gemini live audio models, so there have been some hiccups.
In the past I have generally just had a list of dimensions to help the candidate explore, like separation of concerns, scalability on different system dimensions (concurrent txns, storage, memory, etc), analogies to existing systems/patterns, etc.
I have usually never had a script, merely a problem with a fairly generous solution space and a list of increasingly difficult-to-satisfy requirements, in order to pressure even the best candidates just a little.
HR has for the last decade tried to completely ignore that and instead tried to quantify candidates' "goodness" with scores, scripts, and other bullshit. This has had the rather obvious outcome of missing really good folks that didn't fit into their box and hiring utter trash that gamed their stupid metrics. They keep telling me this is "industry standard" and "how Google does it", but only the latter is actually true; the former was forced for no reason whatsoever. They conveniently leave out that the reason Google did this for so long is that they completely over-indexed on hiring fresh graduates with no experience and little to no intuition or real-world knowledge, and as such needed to focus entirely on IQ-test-esque questions just to try to filter for the top X% of otherwise indistinguishable candidates. None of which is relevant for small teams hiring 10yr+ industry seniors with relevant domain expertise.
Interviews are meant to be about working out if someone will be successful on your team, that means determining if they have the technical chops, a decent enough communication style and enough experience/intuition to work in unfamiliar problem spaces effectively.
Really all you need is the vibe check, a good collaborative systems design exercise helps explore that vibe and quickly separates the pretenders from people with the required knowledge and intuition.
No links to the Terms of Service and Privacy Policy.
No way to preview without signing in.
Only way to sign in is with Google.
"Trusted by engineers landing jobs at" ...given how new this is, is this line marketing fluff or is there evidence that engineers actually trust LeetSys?
Finally, I'm worried that this will make system design interviews as miserable as coding interviews.
Semi off topic, but is it really legal to put Google/Netflix/etc's logos on their website like that?
The goal is absolutely the opposite of making system design interviews more miserable. Right now, preparing for these interviews is really gatekept – either you need friends in big tech willing to help you practice, or you have to pay hundreds of dollars for one-on-one sessions.
We're trying to change that, now that it's possible with AI.
So I'm probably not your target audience (I haven't had to interview for a software eng job in at least 7-8 years), but from my past experience with these interviews, system design is typically the most sensible part of the interview process. It really does just test whether you have experience solving those problems or, if you're less senior, whether you can at least think through the problems logically. There are even pretty good system design interview books out there. What do you think is gatekept about this?
Leetcode took off as a trend because it wasn't easy for company interviewers to come up with good coding questions, partly because the daily work of most software engineers at most companies simply doesn't involve much tricky programming. System design is the opposite: most software engineers have to work on system design in their daily work, and most companies can simply ask system design questions closely related to their own problems.
Looking at it from this lens, I'm not sure what a leetcode for system design adds in value, that a system design interview book doesn't already give.
What does leetcode for code add if you can read a book about algorithms?
Rhetorical question, obviously. Practice makes perfect, reading about practice doesn’t. Interviews are not even close to what an engineer’s job looks like, so interviews need to be approached just like any other skill you need to learn i.e. practice.
I bet this is not marketing, just an artifact of intensive vibe-coding