Amen! I attend this same church.
My favorite professor in engineering school always gave open book tests.
In the real world of work, everyone has full access to all the available data and information.
Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.
Doing this is called "engineering". And this is what this professor taught.
- something we've been doing since forever
- the latest trend that can be picked up just-in-time if you ever need it
Times have not changed. This is still the focus of teacher prep programs.
I miss her jokes against anxious nerds that just wanted to code :(
Don't forget the rise of boot camps, where some educators are not exactly aligned with higher ethical standards.
Years ago I started on a new team as a senior dev, and did weeks of pair programming with a more junior dev to intro me to the codebase. His approach was maddening; I called it "spray and pray" development. He would type out lines or paragraphs of the first thing that came to mind just after sitting down and opening an editor. I'd try to talk him into actually taking even a few minutes to think about the problem first, but it never took hold. He'd be furiously typing, while I would come up with a working solution without touching a keyboard, usually with a whiteboard or notebook, but we'd have to try his first. This was C++/trading, so the type-compile-debug cycle could be tens of minutes. I kept relaying this to my supervisor, and after a few months of this he was let go.
Now as a hiring manager I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding. Of course I'm biased as a dual CS/philosophy major, but it's very rare that I'm looking for someone who can just write a lot of code. Especially for juniors, as analytical thinking is way harder to teach than how to program.
Teaching people to think is perhaps the world's most under-rated skill.
I told both of my (step)sons that I would only help them pay for college or trade school - their choice - if they were getting a degree in something “useful”. Not philosophy, not Ancient Chinese Art History etc.
I also told them that they would have to get loans in their own names and I would help them pay off the loans once they graduated and started working gainfully.
We have these sprint planning meetings and the like where we throw estimates on the time some task will take but the reality is for most tasks it's maybe a couple dozen lines of actual code. The rest is all what I'd call "social engineering" and figuring out what actually needs to be done, and testing.
Meanwhile upper management is running around freaking out because they can't find enough talent with X years of Y [language/framework] experience, imagining that this is the wizard power they need.
The hardest problem at most shops is getting business domain knowledge, not technical knowledge. Or at least creating a pipeline between the people with the business knowledge and the technical knowledge that functions.
Anyways, yes I have 3/4 a PHIL major and it actually has served me well. My only regret is not finishing it. But once I started making tech industry cash it was basically impossible for me to return to school. I've met a few other people over the years like me, who dropped out in the 90s .com boom and then never went back.
Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".
Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".
It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."
The way you are perceived by others depends on your behaviour. If you want to be perceived differently, adjust your behaviour; don't demand that others change. They won't.
The humanities, especially the classic texts, cover human interaction and communication in a very compact form. My favorite sources are the Bible, Cicero, and Machiavelli. For example, Machiavelli says that if you must do bad things to people, do them all at once, while good things you should spread out over time. This is common sense. Once you catch the flavor of his thinking it's pretty easy to work other situations out for yourself, in the same way that good engineering classes teach you how to decompose and solve technical problems.
Unlike my teachers, none of my bosses ever put me in an empty room with only a pencil and a sheet of paper to solve given problems.
Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.
I remember being given a proof of why RSA encryption is secure. All the other students just regurgitated it. It made superficial sense I guess.
However, I could not understand the proof and felt quite stupid. Eventually I went to my professor for help. He admitted the proof he had given was incomplete (and showed me why it still worked). He also said he hadn't expected anyone to notice it wasn't a complete proof.
I think you two are agreeing. GP said that they found they couldn't memorize something until they actually understood it.
> I find it hard to memorise things I don't actually understand.
Isn't it the parent's point?
If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
As a concrete example, there is a class of problems that are well served by dynamic programming. So we would review specific examples like Dijkstra's algorithm for shortest path. Or Wagner–Fischer algorithm for Levenshtein-style string editing. But we would also learn, often via these concrete examples, of how to classify and structure a problem into a dynamic programming solution.
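For a concrete flavor of what "structure a problem into a dynamic programming solution" means, the Wagner–Fischer recurrence mentioned above is small enough to sketch from memory (a minimal Python sketch, not production code):

    # Minimal Wagner-Fischer sketch: dp[i][j] = edits to turn a[:i] into b[:j]
    def edit_distance(a: str, b: str) -> int:
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i                      # delete everything in a[:i]
        for j in range(n + 1):
            dp[0][j] = j                      # insert everything in b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution / match
        return dp[m][n]

    assert edit_distance("kitten", "sitting") == 3

Once you see that the whole exercise is "define the subproblems, write the recurrence, fill the table", recognizing other problems that fit the mold gets much easier.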
I have no idea if this is what is meant by "l33t code solutions", but I thought it would be a helpful response anyway. The bottom line is that these are not common in industry, because hard computer science is not necessary for typical business problems. The same way you don't require materials-science advancements to build a typical house. Instead it flows the other way, where advancements in materials science trickle down to change what the typical house build looks like.
Memorization of l33t code DOES work well as prep for l33t code tests. I just don't think l33t code has much to do with application programming. I've long felt that "computer science" is physics for computers, low on the abstraction ladder, and there are missing labels for the higher complexity subjects built on it. Imagine if all physical sciences were called "physics" and so in order to get a job as a biologist you should expect to be asked questions about the Schroedinger equation and the standard model. We desperately need "application engineering" to be a distinct subject taught at the university level.
A dominant majority in public schools starting late 1970s seems to follow the "Lying to Children" approach which is often mistakenly recognized as by-rote teaching but are based in Paulo Freire's works that are in turn based on Mao's torture discoveries from the 1950s.
This approach, contrary to classical approaches, leverages a torturous process which seems purposefully built to fracture and weed out the intelligent individual from useful fields, imposing thresholds of stress sufficient to induce PTSD or psychosis, selecting for and filtering in favor of those who can flexibly/willfully blind/corrupt themselves.
Such sequences include Algebra -> Geometry -> Trigonometry, where gimmicks in undisclosed changes to grading cause circular trauma loops and the abandonment of math-dependent careers thereafter. Similar structures are also found at university, in Economics, Business, and Physics, which utilize similar fail-scenarios that burn bridges: you can't go back when the failure lagged from the first sequence and you passed the second, unrelated sequence. No help occurs, inducing confusion and frustration to PTSD levels, before the teacher offers the Alice in Wonderland Technique: "If you aren't able to do these things, perhaps you shouldn't go into a field that uses them." (ref. the Kubark Report, a declassified CIA manual)
Have you been able to discern whether these "patterns" as you've called them aren't just the practical reversion to the classical approach (Trivium/Quadrivium)? Also known as the first-principles approach after all the filtering has been done.
To compare: Classical approaches start with nothing but a useful real system and observations which don't entrench false assumptions as truth, which are then reduced to components and relationships to form a model. The model is then checked for accuracy against current data to separate truth from false in those relationships/assertions in an iterative process with the end goal being to predict future events in similar systems accurately. The approach uses both a priori and a posteriori components to reasoning.
Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together). Upon the next iteration one must unlearn the false parts while relearning the true parts (but we can't really unlearn, we can only strengthen or weaken), which in turn creates inconsistent mental states, imposing stress (torture). This is repeated on an ongoing basis, often circular in nature (structuring), leveraging psychological blindspots (clustering), with several purposefully structured failings (elements) to gatekeep math through a torturous process, math being the basis for science and other risky subject matter. As the student progresses towards mastery (gnosis), the systems become increasingly more useful. One must repeatedly struggle in their sessions to learn, the premise being that if you aren't struggling you aren't learning. This mostly uses a faux a priori reasoning without the properties of metaphysical objectivity (tied to objective measure, at least not until the very end).
If you don't recognize this, an example would be the electrical water-pipe pressure analogy. Diffusion of charge in like materials, with intensity (current) toward the outermost layer, was the first-principles approach pre-1978 (I = V/R). The water analogy fails when the naive student tries to relate the behavior to pressure equations; it ends up being contradictory at a number of points in the system, introducing stumbling blocks that must be unlearned.
Torture being the purposefully directed imposition of psychological stress beyond an individual's capacity to cope, toward physiological stages of heightened suggestibility and mental breakdown (where rational thought is reduced or non-existent in the intelligent).
It is often recognized by its characteristic subgroups of Elements (cognitive dissonance, a lack of agency to remove oneself and coercion/compulsion with real or perceived loss or the threat thereof), Structuring (circular patterns of strictness followed by leniency in a loop, fractionation), and Clustering (psychological blindspots).
Can you provide some concrete examples of it?
I think there is validity to the approach, but the sciences would be much, much improved if taught more like history lessons. Here is how we used to think about gravity; here's the formula, and it kind of worked, except... Here are the planetary orbits we used to use when we assumed they had to be circles. Here's how the data looked and here's how they accounted for it...
This would accomplish two goals - learning the wrong-but-usable model for immediate use (building on sand) and building an innate understanding of how science actually progresses. Too little focus is on how we always create magic numbers and vague concepts (dark matter, for instance) to account for structural problems we have no good answer for.
Being able to "sniff the fudge" would be a superpower when deciding what to write a PhD on, for instance. How much better would science be if everyone strengthened this muscle throughout their education?
Also, in Algebra I've seen a flawed version of mathematical operations being taught that breaks down with negative numbers under multiplication (when the correct version is closed under multiplication). The tests were supposedly randomized (but seemed to target low-income demographics). The process is nearly identical, but the answers are ultimately not correct. The teachers graded on the work to the exclusion of the correct answer: so long as you showed the expected Algebra process, you passed without getting the right answer. Geometry was distinct and unrelated, and by Trigonometry the class required both the correct process and the correct answer. You don't find out there is a problem until Trigonometry, and by then the teacher either doesn't know where the person is failing comprehension or isn't paid to reteach an earlier class, and you can't go back.
I've seen and heard horror stories of students where they'd failed Trig 7+ times at the college level, and wouldn't have progressed if not for a devoted teacher helping them after-hours (basically correcting and reteaching Algebra). These kids literally would break out in a cold PTSD sweat just hearing the associated words related to math.
Only when I got into my late twenties did I realize how wrong he was. Memorization and understanding go hand in hand, but if one of them has to come first, then it's memorization. He probably said that because that was what kids (who were forced to do rote memorization) wanted to hear.
It is what you memorize that is important; you can't have a good discussion about a topic if you don't have the facts and logic of the topic in memory. On the other hand, using memory to paper over bad design instead of simplifying or properly modularizing it leads to that 'the worst code I have seen is code I wrote six months ago' feeling.
This may be true of mathematical proofs, but it surely must not be true in general. Memorizing long strings of digits of pi probably isn’t much easier if you understand geometry. Memorizing famous speeches probably isn’t much easier if you understand the historical context.
Not commenting on the merits of critical thinking vs memorization either way, but I think it would be meaningfully easier to memorize famous speeches if you understand the historical context.
There is also something to the practice of reproducing something. I always took this as a form of "machine learning" for us. Just as you get better at juggling by actually juggling, you get better at thinking about math by thinking about math.
As you discovered: A properly structured memorization of carefully selected real world material forces you to come up with tricks and techniques to remember things. With structured information (proofs in your case) you start learning that the most efficient way to memorize is to understand, which then reduces the memorization problem into one of categorizing the proof and understanding the logical steps to get from one step to another. In doing so, you are forced to learn and understand the material.
Another controversial take (for HN, anyway) is that this is what happens when programmers study LeetCode. There's a meme that the way to prep for interviews is to "memorize LeetCode". You can tell who hasn't done much LeetCode interviewing if they think memorizing a lot of problems is a viable way to pass interviews. People who attempt this discover that there are far too many questions to memorize, and the best jobs have already written their own questions that aren't out of LeetCode. Even if you do get a direct LeetCode problem in an interview, a good interviewer will expect you to explain your logic, describe how you arrived at the solution, and might introduce a change if they suspect you're regurgitating memorized answers.
Instead, the strategy that actually works is to learn the categories of LeetCode style questions, understand the much smaller number of algorithms, and learn how to apply them to new problems. It’s far easier to memorize the dozen or so patterns used in LeetCode problems (binary search, two pointers, greedy, backtracking, and so on) and then learn how to apply those. By practicing you’re not memorizing the specific problems, you’re teaching yourself how to apply algorithms.
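For example, "two pointers" is less a problem to memorize than a shape to recognize. A minimal Python sketch (a made-up example, not any specific LeetCode question):

    # Two-pointers pattern: find indices of two numbers in a sorted list
    # that sum to a target, in O(n) time and O(1) extra space.
    def two_sum_sorted(nums, target):
        lo, hi = 0, len(nums) - 1
        while lo < hi:
            s = nums[lo] + nums[hi]
            if s == target:
                return lo, hi      # found a pair
            if s < target:
                lo += 1            # need a larger sum: advance the left pointer
            else:
                hi -= 1            # need a smaller sum: retreat the right pointer
        return None                # no pair sums to the target

    assert two_sum_sorted([1, 3, 4, 6, 9], 10) == (0, 4)   # 1 + 9

Once you've internalized why the pointers can only move inward, any problem with that shape becomes a five-minute exercise rather than a memorized answer.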
Side note: I’m not advocating for or against LeetCode, I’m trying to explain a viable strategy for today’s interview format.
I still don’t like leetcode, though.
Code-wise, I spent a lot of time in college reading other people's code. But no memorization. I remember David Betz advsys, Tim Budd's "Little Smalltalk", and Matt Dillon's "DME Editor" and C compiler.
Coding, doctors, plumber… different information, often similar skill sets.
I worked a job doing tech support for some enterprise level networking equipment. It was the late 1990s and we were desperate for warm bodies. Hired a former truck driver who just so happened to do a lot of woodworking and other things.
Great hire.
A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.
But employers in this field don't "get" that. Regular companies are infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.
It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)
Capex vs opex, that's the fundamental problem at heart. It "looks better on the numbers" to have recruiting costs than to set aside a senior developer plus pay the junior for a few months. That is why everyone and their dog only wants to hire seniors, because they have the skillset and experience that lets you sit their ass in front of any random semi-fossil project and they'll figure it out on their own.
If the stonk analysts would go and actually dive deep into the numbers to look at hiring side costs (like headhunter expenses, employee retention and the likes), you'd see a course change pretty fast... but this kind of in-depth analysis, that's only being done by a fair few short-sellers who focus on struggling companies and not big tech.
In the end, it's a "tragedy of the commons" scenario. It's fine if a few companies do that, it's fine if a lot of companies do that... but when no one wants to train juniors any more (because they immediately get poached by the big ones), suddenly society as a whole has a real and massive problem.
Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
I have been in the various nooks and crannies of the Internet/software dev industry my whole career (I'm 49). I can't think of any time when the stock market didn't drive software innovation. It's always been either invent something -> go public -> exit, or invent something -> increase the stock price of an existing public corp.
Yes, but today more and more is invent something -> achieve dominance -> get bought up by an even larger megacorp. That drives the enshittification circle.
Someone's cousin, let's leave it at that, someone's damn cousin or close friend, or anyone else with merely a pulse. I've had interviews where the company had just been turned over from people that mattered, and you. could. tell.
One couldn't even tell me why the project I needed to do for them ::rolleyes::, their own boilerplate code (which they said would run), would have runtime issues; I needed to self-debug it just to get it to a starting point.
It's like, Manager: Oh, here's this non-tangential thing that they tell me you need to complete before I can consider you for the position.... Me: Oh, can I ask you anything about it?.... Manager: No
Forcing kids to sit and memorize facts isn’t suddenly going to make them a better thinker, but much of my process of being a better thinker is something akin to sitting around and memorizing facts. (With a healthy dose of interacting substantively and curiously with said facts)
My experience as a professor and a student is that this doesn't make any difference. Unless you can copy verbatim the solution to your problem from the book (which never happens), you better have a good understanding of the subject in order to solve problems in the allocated time. You're not going to acquire that knowledge during your test.
Exactly the point of his test methodology.
What he asked of students on a test was to *apply* knowledge and information to *unique* problems and create a solution that did not exist in any book.
I only brought 4 things to his tests --- textbook, pencil, calculator and a capable, motivated and determined brain. And his tests revealed the limits of what you could achieve with these items.
However most of us are not in that situation. It is better for us to just look up those details as we need them because it gives us more room to handle a broader variety of situations.
I think the problem with this is that it requires the professor to mentally fully engage when marking assignments and many educators do not have the capacity and/or desire to do so.
The bigger your mental toolbox the more effective you will be at solving the problems. Looking up a tool and learning just enough to use it JIT is much slower than using a handy tool that you already masterfully know how to use.
This is as true for physical tools as for programming concepts like algorithms and data structures. In the worst case you won’t even know to look for a tool and will use whatever is handy, like the proverbial hammer.
Very anecdotally, but I'd hazard that most of these low-hanging-fruit, low-value-add roles are much less common now, since they tended to be blockers for operational improvement. Six Sigma, Lean, and various flavors of Agile would often surface these low performers, and they either improved or got shown the door between 2005 and 2020.
Not that everyone is 100% all the time, every day, but what we are left with is often people that are highly competent at not just their task list but at their job.
(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)
Ahh, but this is part of the problem. Yes, they have access, but there is -so much- information, it punches through our context window. So we resort to executive summaries, or convince ourselves that something that's relevant is actually not.
At least an LLM can take a full view of the context in aggregate and peel out the signal. There is value there, but no jobs are being replaced.
I've had a frustrating experience the past few years trying to hire junior sysadmins because of a real lack of problem solving skills once something went wrong outside of various playbooks they memorized to follow.
I don't need someone who can follow a pre-written playbook, I have ansible for that. I need someone that understands theory, regardless of specific implementations, and can problem solve effectively so they can handle unpredictable or novel issues.
To put another way, I can teach a junior the specifics of bind9 named.conf, or the specifics of our own infrastructure, but I shouldn't be expected to teach them what DNS in general is and how it works.
But the candidates we get are the opposite - they know specific tools, but lack more generalized theory and problem solving skills.
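To be concrete about "theory, regardless of specific implementations": I want someone who understands that DNS is a query/response protocol they can poke at directly, not a property of one vendor's config format. A toy sketch of that mindset (assumes the third-party dnspython 2.x package; purely illustrative):

    # Ask a specific recursive resolver instead of whatever the OS is
    # configured with; the protocol is the same either way.
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.1"]
    answer = resolver.resolve("example.com", "A")
    for record in answer:
        print(record.address, "TTL:", answer.rrset.ttl)

Someone who thinks at that level can pick up bind9, Unbound, or Route 53 specifics in a week; the reverse is not true.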
In developing areas they commonly implement more modern education models, since everything is newer and they are free to adopt newer things.
Those newer models focus more on exactly this: teaching a person how to go through the process of finding solutions, rather than "knowing a lot to enable the process of thinking".
Not saying which is better or worse, but reading this comment and the article reminds me of this.
A lot of people I see know tons of interesting things, but anything outside their knowledge is a complete mystery.
All the while, people from developing areas learn to solve issues. A lot of individuals from there also get out of poverty and do really well for themselves.
Of course, this is a generalization and doesn't hold up in all cases, but I can't help thinking about it.
A lot of my colleagues don't know how to solve problems simply because they don't RTFM. They rely on knowledge from their education, which is already outdated before they even sign up. I try to teach them to RTFM. It seems hopeless. They look down on me because I have no papers. But if shit hits the fan, they come to me to solve the problem.
A wise guy I met once said (likely not his own words): there are two types of people, those who think in problems and those who think in solutions.
I'd relate that to education, not prebaked human properties.
[1]: https://www.shrm.org/topics-tools/news/technology/ai-will-sh...
“My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” - Matt Garman
"We will need fewer people doing some of the jobs that are being done today” - Amazon CEO Andy Jassy
Maybe they differ in degree but not in sentiment.
From the perspective of a former employee. I knew that going in though. I was 46 at the time, AWS was my 8th job and knowing AWS’s reputation from 2nd and 3rd hand information, I didn’t even entertain an opportunity that would have forced me to relocate.
I interviewed for a “field by design” role that was “permanently remote” [sic].
But even those positions had an RTO mandate after I already left.
There's an endless series of one pagers with this idea or that idea, but from what I witnessed first hand, the ones that stuck were the ones that made money.
Jassy was a decent guy when I was there, but that was a decade ago. A CEO is a PR machine more than anything else, and the AI hype train has been so strong that if you do anything other than saying AI is the truth, the light and the way, you lose market share to competitors.
AI, much like automation in general, does allow fewer people to do more, but in my experience, customer desires expand to fill a vacuum and if fewer people can do more, they'll want more to the point that they'll keep on hiring more and more people.
If you're quoting something, the only ethical thing to do is as verbatim as possible and with a sufficient amount of context. Speeches should not be cleaned up to what you think they should have said.
Now, the question of who you go to for quotes, on the other hand .. that's how issues are really pushed around the frame.
All of a sudden, to ensure better support and separation of concerns, people needed a team with a manager for each service. If this hadn't been the case, the industry as a whole could likely work with 40%-50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10-20% of the code base is in active evolution; what that means in a microservices world is that an equivalent share of teams are sitting idle.
When I started out, huge C++ and Java code bases were pretty much the norm, and that was also one of the reasons why things were hard and the barrier to entry was high. In this microservices world, things are small enough that any small group of even low-productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.
To me it's these kinds of places that are in real trouble. There is not enough work to justify keeping dozens or even hundreds of teams, their management and their hierarchies, all employed for quite literally doing nothing.
I think sometimes the definition of work gets narrowed to a point so infinitesimal that everyone but the speaker is just a lazy nobody.
There was an excellent article on here about working at enterprise scale. My experience has been similar. You get to do work that feels really real, almost like school assignments with instant feedback and obvious rewards when you're at a small company. When I worked at big companies it all felt like bullshit until I screwed it up and a senator was interested in "Learning more" (for example).
The last few 9s are awful hard to chase down and a lot of the steps of handling edge case failures or features are extremely manual.
I think it depends on the industry. In safety-critical systems, you need to be testing, writing documentation, producing architectural artifacts, meeting with customers, etc.
There's not that much idle time. Unless you mean idle time actually writing code and that's not always a full time job.
https://nordicapis.com/the-bezos-api-mandate-amazons-manifes...
1.) Elon fired 80% of Twitter and 3 years later it still hasn't collapsed or fallen into technical calamity. Every tech board/CEO took note of that.
2.) Every kid and their sister going to college who wants a middle class life with generous working conditions is targeting tech. Every teenage nerd saw those over employed guys making $600k from their couch during the pandemic.
Edit: huh, what’s with the downvote, is this wrong? Did I overstate it? Here’s the data: https://www.demandsage.com/twitter-employees/
You're committing the classic fallacy around microservices here. The services themselves are simpler. The whole software is not.
When you take a classic monolith and split it up into microservices that are individually simple, the complexity does not go away, it simply moves into the higher abstractions. The complexity now lives in how the microservices interact.
In reality, the barrier to entry on monoliths wasn't that high either. You could get "low productivity employees" (I'd recommend you just call them "novices" or "juniors") to do the work, it'd just be best served with tomato sauce rather than deployed to production.
The same applies to microservices. You can have inexperienced devs build out individual microservices, but to stitch them together well is hard, arguably harder than ye-olde-monolith now that Java and more recent languages have good module systems.
Big businesses don’t inherently require the complexity of architecture they have. There is always a path-dependent evolution and vestigial complexity proportional to how large and fast they grew.
The real purpose of large scale architecture is to scale teams much moreso than business logic. But why does headcount grow? Is it because domains require it? Sure that’s what ambitious middle managers will say, but the real reason is you have money to invest in growth (whether from revenue or from a VC). For any complex architecture there is usually a dramatically simpler one that could still move the essential bits around, it just might not support the same number of engineers delineated into different teams with narrower responsibilities.
The general headcount growth and architecture trajectory is therefore governed by business success. When we're growing we hire, and we create complex architecture to chase growth in as many directions as possible. Eventually, when growth slows, we have a system that is so complex it requires a lot of people just to understand and maintain; even if the headcount is no longer justified, those with power in the human structure will bend over backwards to justify themselves. This is where the playbook changes and a private-equity (or Elon) mentality is applied: ruthlessly cut, and force the rest of the people to figure out how to keep the lights on.
I consider advances in AI and productivity orthogonal to all this. It will affect how people do their jobs, what is possible, and the economics of that activity, but the fundamental dynamics of scale and architectural complexity will remain. They’ll still hire more people to grow and look for ways to apply them.
The question is why. You mention microservices. I'm not convinced.
Many think it is "horizontals". Possible, these taxes add up it is true.
Perhaps it is cultural? Perhaps it has to do with the workforce in some manner. I don't know and AFAIK it has not been rigorously studied.
I'll never forget the sama AGI posts before o3 launched and the subsequent doomer posting from techies. Feels so stupid in hindsight.
Speaking as a person who is responsible for delivering projects, I've never thought "it sure would be nice if I had a few junior devs". Why would I, when I can poach an underpaid mid-level developer for 20% more?
Turns out some people suck, but most of them don’t suck.
Unlike AI, which gives me fake methods, broken code, and wrong advice with full confidence.
Yes it was just as well structured as I - someone who has been coding as a hobby or professionally for four decades - would have done.
When I ask for additional headcount, I’m looking at the next quarter since that’s what my manager is judging me based on.
I'm also a great teacher. That's been my $DayJob for the past decade: bringing new-to-the-company processes and technologies in, leading initiatives, teaching other developers, working with sales, CxOs (at smaller companies), and directors, explaining large "organizational transformation" proposals, etc., first at startups and then doing the same in cloud consulting, first at AWS (a full-time ProServe role) and now as a full-time staff architect at a third-party consulting company.
But when I have been responsible for delivery, I only hire people who have experience “dealing with ambiguity” and show that I can give them a decently complicated problem and they can take the ball and run with it and make decent decisions and do research. I don’t even do coding interviews - when I interview it’s strictly behavioral and talking through their past projects, decision making processes, how they overcame challenges etc.
In terms of AWS LPs, it’s “Taking Ownership” (yeah quoting Amazon LPs made me throw up a little).
My evaluations are based on quarterly goals and quarterly deliverables. No one at a corporation cares about anything above how it affects them.
Bringing junior developers up to speed just for them to jump ship within three years or less doesn't benefit anyone at the corporate level. Sure, they jump ship because of salary compression and inversion, where internal raises don't correspond to market rates. Even first-level managers don't have the say-so or budget to affect that.
This is true even for BigTech companies. A former intern I mentored, who got a return offer a year before I left AWS, just got promoted to an L5, and their comp package was 20% less than new hires coming in at an L5.
Everyone will be long gone from the company if not completely retired by the time that happens.
Especially with the amount of money that was put into just astroturfing the technology as more than it is.
You really want to believe, maybe even need to believe, that anyone who comes up with this idea in their head has never written a single line of code in their life.
It is on its face absurd. And yet I don't doubt for a second that Garman et al. have to fend off legions of hacks who froth at the mouth over this kind of thing.
I just won't use that information in quite the excitable, optimistic way they offer it.
"What does your availability over the next couple of weeks look like to chat about this opportunity?"
That sentiment ignores the magic of how well this works. There are mind blowing moments using AI coding, to pretend that it’s “just auto correct and tab complete” is just as deceiving as “you can vibe code complete programs”.
> "Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs." -- Bill Gates
Do we reward the employee who has added the most weight? Do we celebrate when the AI has added a lot of weight?
At first, it seems like, no, we shouldn't, but actually, it depends. If a person or AI is adding a lot of weight, but it is really important weight, like the engines or the main structure of the plane, then yeah, even though it adds a lot of weight, it's still doing genuinely impressive work. A heavy airplane is more impressive than a light weight one (usually).
I completely understand your analogy and you are right. However, just to nitpick, it is actually super important to have the weight on the airplane in the right place. You have to make sure that your airplane does not become tail-heavy, or it will not be recoverable from a stall. Also, a heavier airplane, within its gross weight limit, is actually safer, as the maneuvering speed increases with weight.
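(For anyone curious, the standard flight-training relation behind that last point is that maneuvering speed scales with the square root of weight,

    V_A = V_{A,\text{max gross}} \cdot \sqrt{W / W_{\text{max gross}}}

so at a lower weight the airframe can be overstressed at a lower speed, while at max gross the wing stalls first.)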
If someone adds more code to the wrong places for the sake of adding more code, the software may not be recoverable for future changes or from bugs. You also often need to add code in the right places for robustness.
Just to nitpick your nitpick, that's only true up to a point, and the range of safe weights isn't all that big really - max payload on most planes is a fraction of the empty weight. And planes can be overweight; reducing weight is a good thing and perhaps needed far more often than adding weight is. The point of the analogy was that over a certain weight, the plane doesn't fly at all. If progress on a plane is safety, stability, or speed, we can measure those things directly. If weight distribution is important to those, that's great: we can measure weight and distribution in service of stability, but weight isn't the primary thing we measure.
Like with airplane weight, you absolutely need some code to get something done, and sometimes more is better. But is more better as a rule? Absolutely not.
The reason this is true is because at a higher weight, you'll stall at max deflection before you can put enough stress on the airframe to be a problem. That is to say, at a given speed a heavier airplane will fall out of the air [hyperbole, it will merely stall - significantly reduced lift] before it can rip the wings/elevator off [hyperbole - damage the airframe]. That makes it questionable whether heavier is safer - just changes the failure mode.
It’s an analogy that gets the job done and is targeted at non-tech managers.
It’s not perfect. Dead code has no “weight” unless you’re in a heavily storage-constrained environment. But 10,000 unnecessary rivets has an effect on the airplane everywhere, all the time.
Assuming it is truly dead and not executable (which someone would have to verify is & remains the case), dead code exerts a pressure on every human engineer who has to read (around) it, determine that it is still dead, etc. It also creates risk that it will be inadvertently activated and create e.g. security exposure.
But if your position is that the percentage of time in the software lifecycle that dead code has a negative effect on a system is anywhere close to the percentage of time in an aircraft lifecycle that extra non-functional rivets (or other unnecessary weight objects) has a negative effect on the aircraft, you’re just wrong.
I bet Google has a lot of tools to say convert a library from one language to another or generate a library based on an API spec. The 30% of code these LLMs are supposedly writing is probably in this camp, not net novel new features.
I ask an AI 4 times to write a method for me. After it keeps failing, I just write it myself. AI wrote 80% of the code!
I assumed it would happen at some point, but I am relieved that the change in sentiment has started before the bubble pops - maybe this will lessen the economic impact.
My boss said we were gonna fire a bunch of people “because AI” as part of some fluff PR to pretend we were actually leaders in AI. We tried that a bit, it was a total mess and we have no clue what we’re doing, I’ve been sent out to walk back our comments.
This is becoming unbreathable for hackers.
The hype train is going to keep on moving for a while yet though.
Pretty obvious conclusion that I think anyone who's thought seriously about this situation has already come to. However, I'm not optimistic that most companies will be able to keep themselves from doing this kind of thing, because I think it's become rather clear that it's incredibly difficult for most leadership in 2025 to prioritize long-term sustainability over short-term profitability.
That being said, internships/co-ops have been popular from companies that I'm familiar with for quite a while specifically to ensure that there are streams of potential future employees. I wonder if we'll see even more focus on internships in the future, to further skirt around the difficulties in hiring junior developers?
Better learn how to learn, as we are not training (or is that paying?) you to learn...
Finally, the c-suite is getting it.
Not to say that‘s what the AWS CEO is doing—maybe it is, maybe it isn’t, I haven’t checked—I’m just commenting on the general idea.
Pasting the quote for reference:
> Amazon Web Services CEO Matt Garman claims that in 2 years coding by humans won't really be a thing, and it will all be done by networks of AI's who are far smarter, cheaper, and more reliable than human coders.
Unless this guy speaks exclusively in riddles, this seems incredibly inconsistent.
If I had any interest in ever working for BigTech again (and I would rather get an anal probe daily with a cactus), I could relatively easily get into Google’s equivalent department as a “senior” based on my connections.
"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code...."
https://www.businessinsider.com/aws-ceo-developers-stop-codi...
If you read the full remarks they're consistent with what he says here. He says "writing code" may be a skill that's less useful, which is why it's important to hire junior devs and teach them how to learn so they learn the skills that are useful.
You need people who can validate LLM-generated code. It takes people with testing and architecture expertise to do so. You only get those things by having humans get expertise through experience.
I’m not even sure AI is good for any engineer, let alone junior engineers. Software engineering at any level is a journey of discovery and learning. Any time I use it I can hear my algebra teacher telling me not to use a calculator or I won’t learn anything.
But overall I’m starting to feel like AI is simply the natural culmination of US economic policy for the last 45 years: short term gains for the top 1% at the expense of a healthy business and the economy in the long term for the rest of us. Jack Welch would be so proud.
There are lots of personal projects that I have wanted to build for years but have pushed off because the “getting started cost” is too high, I get frustrated and annoyed and don’t get far before giving up. Being able to get the tedious crap out of the way lowers the barrier to entry and I can actually do the real project, and get it past some finish line.
Am I learning as much as I would had I powered through it without AI assistance? Probably not, but I am definitely learning more than I would if I had simply not finished (or even started) the project at all.
> (…)
> I’m not even sure AI is good for any engineer
In that case I’m not sure you really agree with this CEO, who is all-in on the idea of LLMs for coding, going so far as to proudly say 80% of engineers at AWS use it and that that number will only rise. Listen to the interview, you don’t even need ten minutes.
As someone who works in AI, any CEO who says that AI is going to replace junior workers has no f*cking clue what they are talking about.
Can SOME people's jobs be replaced by AI? Maybe on paper. But there are tons of tradeoffs if you START with that approach and assume fidelity of outcome.
I don't mean that as a negative, he's doing great work explaining AI to (dev) masses!
Undergraduate -> Graduate Student -> Post-doc -> Tenure/Senior
Some exceptions occur, like people getting tenure without a post-doc, or people finishing an undergraduate degree in one or two years. But no one expects that we can skip the first two stages entirely and still end up with senior researchers.
The same idea applies anywhere. The rule is that if you don't have juniors then you don't get seniors, so you'd better prepare your bot to do everything.
On a side note, y'all must be prompt wizards if you can actually use the LLM code.
I use it for debugging sometimes to get an idea, or for a quick sketch of a UI.
As for actual code: what it writes is a huge mess of spaghetti code, overly verbose, with serious performance and security risks, and a complete misunderstanding of pretty much every design pattern I give it.
But I have not yet been able to consistently get value out of vibe coding. It's great for one-off tasks. I use it to create matplotlib charts just by telling it what I want and showing it the schema of the data I have. It nails that about 90% of the time. I have it spit out close-ended shell scripts, like recently I had it write me a small CLI tool to organize my Raw photos into a directory structure I want by reading the EXIF data and sorting the images accordingly. It's great for this stuff.
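To give a flavor of what "close-ended" means here, the photo-sorting tool was essentially this shape (an illustrative sketch, not the actual generated code; it assumes the third-party exifread package and .NEF raw files):

    # Sort raw photos into YYYY/MM/DD folders based on EXIF capture date.
    import shutil
    import sys
    from pathlib import Path

    import exifread  # third-party; reads EXIF from most raw formats

    def capture_date(path: Path) -> str:
        with path.open("rb") as f:
            tags = exifread.process_file(f)
        stamp = str(tags.get("EXIF DateTimeOriginal", "0000:00:00 00:00:00"))
        return stamp.split(" ")[0].replace(":", "/")  # "2024:06:01 ..." -> "2024/06/01"

    def sort_photos(src: Path, dest: Path) -> None:
        for raw in src.glob("*.NEF"):
            target = dest / capture_date(raw)
            target.mkdir(parents=True, exist_ok=True)
            shutil.move(str(raw), str(target / raw.name))

    if __name__ == "__main__":
        sort_photos(Path(sys.argv[1]), Path(sys.argv[2]))

Small, self-contained, easy to eyeball for correctness: exactly the kind of thing I'm happy to delegate.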
But anything bigger it seems to do useless crap. Creates data models that already exist in the project. Makes unrelated changes. Hallucinates API functions that don't exist. It's just not worth it to me to have to check its work. By the time I've done that, I could have written it myself, and writing the code is usually the most pleasurable part of the job to me.
I think the way I'm finding LLMs to be useful is that they are a brilliant interface to query with, but I have not yet seen any use cases I like where the output is saved, directly incorporated into work, or presented to another human that did not do the prompting.
Just yesterday I uploaded a few files of my code (each about 3,000+ lines) into a GPT-5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500-line file with all the caching objects and functions I needed. Then we went section by section through the main 3,000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this; it basically detected everything I would need to change at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on.]
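The generated file boiled down to a read-through cache, roughly this shape (an illustrative sketch from memory, not the actual output; db.fetch_user is a made-up stand-in for the real query):

    import time

    class QueryCache:
        """Tiny read-through cache with a per-entry TTL."""
        def __init__(self, ttl_seconds: float = 300.0):
            self.ttl = ttl_seconds
            self._store = {}                   # key -> (expires_at, value)

        def get_or_load(self, key, loader):
            now = time.monotonic()
            hit = self._store.get(key)
            if hit and hit[0] > now:
                return hit[1]                  # fresh hit, skip the database
            value = loader()                   # miss: fall through to the DB
            self._store[key] = (now + self.ttl, value)
            return value

        def invalidate(self, key):
            self._store.pop(key, None)         # call this after writes

    # usage:
    # user = cache.get_or_load(("user", user_id), lambda: db.fetch_user(user_id))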
Could I have done this without AI? Sure.. but this was basically like having a second pair of eyes and validating what I'm doing. And saving me a bunch of time so I'm not writing everything from scratch. I have the base template of what I need then I can improve it from there.
All the code it wrote was perfectly clean.. and this is not a one off, I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me.
I've been assuming the people who are having issues are junior devs, who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the llm to do.
I use aider and your description doesn't match my experience, even with a relatively bad-at-coding model (gpt-5). It does actually work and it does generate "good" code - it even matches the style of the existing code.
Prompting is very important, and in an existing code base the success rate is immensely higher if you can hint at a specific implementation - i.e. something a senior who is familiar with the codebase somewhat can do, but a junior may struggle with.
It's important to be clear eyed about where we are here. I think overall I am still faster doing things manually than iterating with aider on an existing code base, but the margin is not very much. It's only going to get better.
Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.
LLMs are actually -the worst- at doing very specific repetitive things. It'd be much more appropriate for one to replace the CEO (the generalist) rather than junior staff.
I remember someone that had a .sig that I loved (Can't remember where. If he's here, kudos!):
> I hate code, and want as little of it in my programs as possible.
> but the code is shitty, like that of a junior developer
> Does an intern cost $20/month? Because that’s what Cursor.ai costs.
> Also: let’s stop kidding ourselves about how good our human first cuts really are.
https://fly.io/blog/youre-all-nuts/
So which one is it?
Is HN's favorite idiot "dumb" or is this CEO "nuts"?
Please HN, I NEED YOU TO TELL ME HOW STUPID I AM!
At least in my personal case, struggling with renewal at Virgin Broadband, multiple humans wasted probably an hour of everyone's time overall on the phone bouncing me around departments, unable to comprehend my request, trying to upsell and pitch irrelevant services, applying contextually inappropriate talking scripts while never approaching what I was asking them in the first place. Giving up on those brainless meat bags and engaging with their chat bot, I was able to resolve what I needed in 10 minutes.
https://www.reddit.com/r/callcentres/comments/1iiqbxh/the_re...
Abuses done by customers: https://www.bbc.com/news/business-59577351
In India most of the banks now have apps that do nearly all the banking you can do by visiting a branch personally. To that extent this future is already here.
When I had to close my loan and had to visit a branch a few times, the manager told me that a significant portion of his people's time now goes into actual banking - which according to him means selling products (fixed deposits, insurance, credit cards) - and not customer support (which the bank thinks is not its job and only does because there is currently no alternative).
In IT, if at a minimum AI would triage the problem intelligently (and not sound like a bot while doing it), that would save my more expensive engineers a lot of time.
Using AI isn’t rocket science. Like you’re talking about using AI as if typing a prompt in English is some kind of hard to learn skill. Do you know English? Check. Can you give instructions? Check. Can you clarify instructions? Check.
Because junior engineers have no problem with wholeheartedly embracing AI - they don't have enough experience to know what doesn't work yet.
In my personal experience, engineers who have experience are much more hesitant to embrace AI and learn everything about it, because they've seen that there are no magic bullets out there. Or they're just set in their ways.
To management that's AI-obsessed, those juniors are preferable to anyone who would say "Maybe AI isn't everything it's cracked up to be." And it really, really helps that junior engineers are the cheapest to hire.
“You won’t lose your job to AI, you’ll lose it to someone who uses AI better than you do”
I’m really tired of this trope. I’ve spent my whole career on “boring CRUD” and the number of relational db backed apps I’ve seen written by devs who’ve never heard of isolation levels is concerning (including myself for a time).
Coincidentally, as soon as these apps see any scale, issues pop up.