Structural engineering (more generally, construction engineering) does work like that. Following the analogy, the engineers draw; they don't lay bricks. But all the best engineers have probably been site supervisors at some point, have watched bricks being laid, have spoken to the bricklayers, etc. Construction methods change, but they don't change as quickly as software engineering methods. There is also a very material and applicable "reality" constraint. Most of a structural engineer's knowledge/heuristics remains valid over long periods of time. The software engineers' body of knowledge can change 52 times in a year. To completely stretch the analogy: the site conditions for construction engineering are better known than the site conditions for a large software project. In the latter case the site itself can be adjusted more easily, and more materially, by the engineering itself, i.e. the ground can move under your feet. Site conditioning on steroids!
Ultimately, that's why I fully agree with the piece. Generic advice may be helpful, but it always applies to some generic site conditions that are less relevant in practice.
The problem is that doing it like that is much too expensive and too slow for most businesses.
A lot of this is because while a 'good' business is waiting for the 'good' software to be written, some crappy business has already written the crappy software and sold it to all the customers you were depending on. In general, customers are very bad at telling the difference between good and bad software, and typically buy whatever looks flashy or whatever the salespeople bribe them hardest to buy.
"Business" runs the same calculations. I'd posit that, as a practical matter, most businesses don't want "good" software; they want "good enough" software.
I don't view that as a failure of abstraction as a design principle as much as it is a pitfall of using the wrong abstraction. Using the right abstraction requires on the ground knowledge, and if nobody communicates that up the chain, well, you get the tree swing cartoon.
Nah, those changes are only on the surface, at the most shallow level.
There are always new techniques, materials and tools in structural engineering as well.
Foundations take a lifetime to change.
I'm fine with the disagreement if you say I'm correct. ¯\_(ツ)_/¯
> any "new techniques, materials and tools" need to be communicated to the brick layers
Same for software.
> That takes time and effort i.e. it all needs to be actively managed. The brick layers have to be able to work with those new techniques and materials.
Same for software.
> I don't want some of them using method #1 over here, and method #2 over there, unless I'm wholly conversant with the methods, and fully confident that it'll all mesh eventually. The system i.e. the whole shebang has to work coherently to serve its purpose.
Same for software.
Virtually every profession has a body of knowledge that's constantly getting updated. Only software engineers seem to have this faulty assumption that they must apply it all immediately. Acknowledging it's a false assumption leads to a better life.
The challenge would be to control the pace of the evolution of the body of knowledge, but more importantly, its application, to a pace that's consistent with the pace of the system you're building.
> faulty assumption that they must apply it all immediately
No truer word was ever said. Everyone is attracted to shiny things.
Very strongly disagree.
There are limitless methods of solving problems with software (due to very few physical constraints) and there are an enormous number of different measures of whether it's "good" or "bad".
It's both the blessing and curse of software.
If you dig deeper you'll realize that it's possible to categorize techniques, tools, libraries, algorithms, recipes, whatever.
And if you dig even deeper, you'll realize that there is foundational knowledge that lets you understand a lot of things that people complain about being too new.
The biggest curse of software is people saying "no" to education and knowledge.
Can you provide concrete examples of the things that you think are foundational in software? I'm thinking beyond "be organized so it's easier for someone to understand", which applies to just about everything we do (e.g. modularity, naming, etc.)
For every different approach, like OOP, functional, relational DB, object DB, enterprise service bus + canonical documents, microservices, cloud, on-prem, etc., they are just options with pros and cons.
With each approach, the set of trade-offs depends on the context the approach is applied in; it's not an absolute set of trade-offs, it's relative.
A critical skill that takes a long time to develop is to see the problem space and do a reasonably good job of identifying how the different approaches fit in with the systems and organizational context.
Here's a real example:
A project required a bunch of new configuration capabilities to be added to a couple of systems using the normal configuration approach found in ERP systems (e.g. flags and codes attached to entities in the system controlling functional flow, data resolution, etc.). But for some of them, a more flexible "if-then" type capability made sense when analyzing the kinds of situations the business would encounter in those areas. For these areas, the naive/simple approach would have been possible, but it would have been fragile and difficult to explain to the business how the different configurations in different places come together to produce the desired result.
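To make the contrast concrete, here's a minimal sketch of the two configuration styles; the names are invented for illustration and not taken from the actual project:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Style 1: flags and codes attached to an entity, interpreted by the surrounding flow.
struct ItemConfig {
    bool        backorderAllowed = false;  // simple flag
    std::string pricingCode      = "STD";  // code resolved elsewhere in the system
};

// Style 2: a small "if-then" rule capability for the genuinely conditional cases.
struct Context {
    std::map<std::string, std::string> fields;  // e.g. {"region", "EU"}, {"channel", "web"}
};

struct Rule {
    std::function<bool(const Context&)> condition;  // the "if" part
    std::string action;                             // the "then" part, e.g. a routing code
};

// First matching rule wins; otherwise fall back to a default.
std::string resolve(const std::vector<Rule>& rules, const Context& ctx) {
    for (const auto& r : rules) {
        if (r.condition(ctx)) return r.action;
    }
    return "DEFAULT";
}
```

The rule form keeps the condition and the outcome next to each other, which is exactly the part that was hard to explain when the behaviour was spread across flags in different places.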
There is no simple rule you can train someone on to spot when this is the right approach and when it is not. It's heavily dependent on the business context and takes experience.
Are you really expecting an answer here? I'll answer anyway.
• A big chunk of the CompSci curriculum is foundational.
• Making wrong states unrepresentable, either via type systems or via the code itself, using invariants, pre/post-conditions, etc. This applies to pretty much every tool or language you can use (a small C++ sketch follows this list).
• Error handling is a topic that goes beyond tools and languages, and even beyond whether you use try/catch, algebraic objects or values. It seeps into logging and observability too.
• Reasoning about time/space and tradeoffs of algorithms and structures, knowing what can and can't be computed, parsed, or recognized at all. Knowing why some problems don’t scale and others do.
• Good modeling of change, including ordering: immutability vs mutation, idempotency, retry logic, concurrency. How to make implicit timing explicit. Knowing which choices are cheap to undo and which are expensive, and designing for that.
• Clear ownership of responsibilities and data between parts of the system via the design of APIs, interfaces and contracts. This applies to OOP, FP, microservices, modules and classes, and even to how one deals with third-party services beyond the basics.
• Computer basics (some of which go back to the 60s/70s or earlier): processes, threads and green threads, scheduling, caches, instructions, the memory hierarchy, data races, deadlocks, and ordering.
• Information theory (a lot of it goes back to Claude Shannon, and earlier): compression, entropy, noise. And logic, sets, relations, proofs.
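As promised above, here's a minimal C++ sketch of the "wrong states unrepresentable" item, using an invented job-status example:

```cpp
#include <string>
#include <type_traits>
#include <variant>

// Instead of one struct where "errorMessage is only meaningful when status == Failed"
// (a wrong state waiting to happen), encode the alternatives in the type itself.
struct Pending   {};
struct Succeeded { std::string resultId; };
struct Failed    { std::string errorMessage; };

using JobState = std::variant<Pending, Succeeded, Failed>;

// Callers are forced to handle each case; a "succeeded job with an error message"
// simply cannot be constructed.
std::string describe(const JobState& s) {
    return std::visit([](const auto& v) -> std::string {
        using T = std::decay_t<decltype(v)>;
        if constexpr (std::is_same_v<T, Pending>)        return "pending";
        else if constexpr (std::is_same_v<T, Succeeded>) return "done: " + v.resultId;
        else                                             return "failed: " + v.errorMessage;
    }, s);
}
```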
I never said there is a "simple rule", only foundational topics. But I'll say again: the biggest curse of software is people saying "no" to education and knowledge.
Yes, and thanks for the examples; it's now clear what you were referring to. I agree that most of those are generally good fundamentals (e.g. wrong states, error handling, time + space), but some are already in complex territory, like mutability. Even though we can see the problem, we have a massive number of OOP systems with state all over the place. So the application of a principle like that is very far from settled, or from having a set of rules to guide SEs.
> The software engineers' body of knowledge can change 52 times in a year
Nah, those changes are only on the surface, at the most shallow level.
I think the types of items you listed above are the shallow layer. The body of knowledge about how to implement software systems above that (the patterns and approaches) is enormous and growing. It's a large collection of approaches each with some strengths and weaknesses but no clear cut rule for application other than significant experience.
They are not, by definition. You provided proof for it yourself: you mention the "body of knowledge [...] above that", so they really aren't the topmost layer.
> is enormous and growing
That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.
I said "shallow", not "topmost".
> That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.
Can you explain when (if ever) a person should use an OOP approach and when (if ever) he/she should use a functional approach to implement a system?
I don't think those fundamentals listed above help answer questions like that and those questions are exactly what the industry has not really figured out yet. We can see both pros and cons to all of the different approaches but we don't have a body of knowledge that can point to concrete evidence that one approach is preferred over the many other approaches.
> Can you explain when (if ever) a person should use an OOP approach and when (if ever) he/she should use a functional approach to implement a system?
I can, and have done several times, actually, for different systems.
> I don't think those fundamentals listed above help me
The list I gave was not exhaustive. You asked yourself for "concrete examples" and I gave examples.
The reason I can't answer hard questions in a simple message is exactly because those foundations are not "shallow" at all.
The reason I asked that question isn't to be argumentative; it's because, IMO, answers to those types of questions are exactly what doesn't exist in the software engineering world.
And talking through the details of our different opinions is how we can understand where each one is coming from and possibly, maybe, incorporate some new information or new way of looking at things into our mental models of the world.
So, if you do think you have an answer, I am truly interested in when you think OOP is appropriate and when functional is better suited (or neither).
If someone asked me that question, I would say "If we're in fantasy land and it's the first system ever built and there are no variables related to existing systems and supportability and resource knowledge, etc., then I really can't answer the question. I've never built a system that was significantly functional, I've only built procedural, OOP and mixtures of those two with sprinklings of functional. I know there are significant pros to functional, but without actually building a complete system at least once, I can't really compare"
I can answer that, and have in the past, as I have done projects in both OOP and FP. But before I answer, I'd ask follow-up questions about the system itself, and I'd give lots of "it depends" and conditions.
There is no quick-and-dirty rule that will apply to any situation, and it's definitely not something I can teach on a message board.
I understand I’m replying against the spirit of your point, but the IEEE has actually published one and it seems to get updated very slowly.
https://www.computer.org/education/bodies-of-knowledge/softw...
Imagine if you worked for an online retailer like Amazon, and you were assigned to architect a change so you can add free sample items into customers' orders. Take a moment to think about how you'd architect such a system, and what requirements you'd anticipate fulfilling. In the next paragraph, I'll tell you what the requirements are. Or you can skip the next paragraph, the size of which should tell you the requirements are more complex than they seem.
The samples must be items in the basket, so the warehouse knows to pick them. They must be added at the moment of checkout, because that's when the order contents and weight can change. Often a customer should receive a sample only once, even if they check out multiple orders, so a record should be kept of which customers have already been allocated a given sample. It should be possible to assign a customer the same sample multiple times, in which case they should receive it once per order until they've received the assigned number. Some samples go out of stock regularly, so the sample items should not be visible to the customer when they view their order on the website, but if shipped it should appear on their receipt to assure them they haven't been charged for it. Samples should never be charged for, even if their barcode is identical to something we normally charge for. If the warehouse is unable to ship the sample, the customer should not receive a missing-item apology or a separate shipment, and the record saying that customer has had that sample already should be decremented. If the warehouse can't ship anything except the sample, the entire order should be delayed/cancelled, never shipping the sample alone. If a customer ordered three of an item and was assigned one sample item with the same barcode but the warehouse only had three items with that barcode in stock, something sensible should happen. One key type of 'sample' is first-time-customer gifts; internal documentation should explain that if the first order a customer places is on 14-day delivery and their second order is on faster delivery and arrives first, the first-order gift will be in the second order to arrive, but that's expected because it's assigned at checkout. If the first-order-checked-out is cancelled, either by the customer or the warehouse, the new-customer gift should be added to the next order they check out. Some customers will want to opt out of free samples, and those who do should not be assigned any samples. But the free sample system is also used by customer services to give out token apology gifts to customers whose orders have had problems, and customers who've been promised a gift should receive it even if they've opted out of free samples.
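To give a feel for how much hidden state even the simplest reading of those rules implies, here's a tiny sketch of just the allocation bookkeeping; every name in it is invented:

```cpp
#include <string>

// Bookkeeping for one (customer, sample) pair.
struct SampleAllocation {
    std::string customerId;
    std::string sampleSku;
    int  assignedCount = 0;      // how many times this customer should receive the sample
    int  usedCount     = 0;      // bumped at checkout, decremented if the warehouse can't ship it
    bool csApologyGift = false;  // customer-service gifts override the opt-out preference
};

// Decide at checkout whether the sample goes into this order's basket.
bool shouldAddToOrder(const SampleAllocation& a, bool customerOptedOut) {
    if (customerOptedOut && !a.csApologyGift) return false;
    return a.usedCount < a.assignedCount;
}

// The warehouse couldn't ship the sample: roll the record back so the customer
// is still owed it on a future order.
void onSampleNotShipped(SampleAllocation& a) {
    if (a.usedCount > 0) --a.usedCount;
}
```

And even this ignores the same-barcode stock clash and the first-order-gift rules.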
No reasonable person can design such a system upfront, because things like 'opt-out mechanism sometimes shouldn't opt you out' and 'more than one definition of a customer's first order' do not occur to reasonable people.
This thought process does use some knowledge of online retail, but not really that much. It's mostly patterns of system decomposition and good engineering.
Edit: the point of the article itself stands, if the codebase is in no shape to have these free samples built as I described then my input is useless, other than to consider working toward that architectural goal.
When you have done this many times you absolutely can design a large application without touching the code. This is part planning and risk analysis experience and part architecture experience. You absolutely need a lot of experience creating large applications multiple times and going through that organizational grind but prior experience in management and writing high level plans is extremely helpful.
When it comes to extending an existing application it really comes down to how well the base application was planned to begin with. In most cases the base application developers have no idea, because they either outsourced the planning to some external artifact or simply pushed through it one line at a time and never looked back. Either way the person writing the extension will be more concerned with the corresponding service data and accessibility than conformance to the base application code if it is not well documented and not well tested in a test automation scheme.
I've personally yet to have a situation where that comes up. And every application I've ever worked on has its architecture evolve over time, as behavior changes and new domain concepts are identified.
There are recurring patterns (one might even call them Design Patterns), but by the time we've internalized them we have even less need for up-front planning. Why write the doc when you can just implement the code?
But this is exactly the type of generic software design advice the article warns us about! And it mostly results in all the bad software practices we as users know and love remaining unchanged (consistently "bad" is better than being good at least in some areas!)
What you describe just sounds "inconsistent AND bad".
That’s a logic error. The claim was that "inconsistent but good" can exist, not that "inconsistent == good". Responding with one example where "inconsistent" turned out badly is a totally different claim and doesn't refute what GP says.
I actually agree that half-assing a problem is not the best solution.
It's just that they are not examples of "inconsistent but good". They are not even "good", just "inconsistent". You said yourself that they're worse overall.
Nobody is denying that "inconsistent" can be bad on its own.
But you can't say that "inconsistent but good" is bad by providing an example of how "inconsistent and bad" is bad.
Sometimes there is a reason! Sometimes there isn't a reason, but it might be something we want to move everything over to if it works well and will rip out if it doesn't. Sometimes it's just someone who believes that functional programming is Objectively Better, and those are when an architect can say "nope, you don't get to be anti-social."
The best architects will identify some hairy problem that would benefit from those skills and get management to point the engineer in that direction instead.
A system that requires homogeneity to function is limited in the kinds of problems it can solve well. But that shouldn't be an excuse to ignore our coworkers (or the other teams: I've recently been seeing cowboy teams be an even bigger problem than cowboy coders.)
I've never seen consistency of libraries and even programming languages have a negative impact. Conversely, the situation you describe, or even going out of the way to use $next_lang entirely, is almost always a bad idea.
The consistency of where to place your braces is important within a given code base and teams working on it, but not that important across them, because each one is internally consistent. Conversely, two code bases and teams using two DBs that solve the same problem is likely not a good idea because now you have two types of DBs to maintain. Also, if one team solves a DB-specific problem, say, a performance issue, it might not be obvious how the other team might be able to pick up the results of that work and benefit from it.
So I don't know. I think the answer depends on how you define "consistency", which OP hasn't done very well.
There are absolutely exceptions and nuances. But I think when weighing trade-offs, program makers by and large deeply under-weigh being consistent.
> software that has "good" and "bad" parts is unpredictable
It's true and false at the same time; it depends.
Here I can bring an example: you're maintaining a production system that has been running for years.
There is a flaw in some part of the codebase that has probably been ignored, either because of:
1. a bad implementation / hacky approach, or
2. the system outgrowing the implementation.
So you try to "fix" it, but suddenly other internal tools stop working, customers contact support because it changed the behaviour on their end, some CI randomly fails, etc.
Software doesn't exist in a vacuum; complex interactions sometimes prevent "good" code from existing, because that's just reality.
I don't like it either, but this is just what it is.
High quality and consistent > Low quality and consistent > Variable quality and inconsistent. If you're going to be the cause of the regression into variable quality and inconsistent you'd better deliver on bringing it back up to high quality and consistent. That's a lot of work that most people aren't cut out for because it's usually not a technical change but a cultural change that's needed. How did a codebase get into the state of being below standards? How are you going to prevent that from happening again? You are unlikely to Pull Request your way out of that situation.
Software that has only "bad" parts is also very unpredictable.
(Unless "bad" means something else than "bad", it's hard to keep up with the lingo)
Your example is just bad code that's unpredictable.
My assertion is that software that has only bad parts is way more unpredictable than software that has both good and bad.
For multiple reasons: because "bad" is not necessarily internally consistent. Because it's buggy.
Unless, again, "bad" here means "objectively good quality but I get to call it bad because it's not in the way I like to write code".
I think adherence to “consistency is more important than ‘good design’” naturally leads to boiling the ocean refactoring and/or rewrites, which are far riskier endeavors with lower success rates than iterative refactoring of a working system over time.
migrate the rest of the codebase!
Then everyone benefits from the discovery.
If that's difficult, write or find tooling to make that possible.
It's in the "if it hurts, do it more often" school of software dev.
https://martinfowler.com/bliki/FrequencyReducesDifficulty.ht...
This is an example of a premature optimization. The reason it can still be good is that large refactors are an art that most people haven't suffered enough to master. There are patterns to make it tractable, but it's riskier, and engineers often aren't personally invested in their codebases enough to bother beyond fixing the few things that personally drive them nuts.
Consistency enables velocity. If there is consistency, devs can start to make assumptions. "Auth is here, database is there, this is how we handle ABC". Possible problems show up in reviews by being different to expectation. "Hey, where's XYZ?", "Why are you querying the database in the constructor?"
Onboarding between teams becomes a lot easier, ramp up time is smaller.
Without consistency, you end up with lots of small pockets of behavior that cause downstream problems for the org as a whole.
Every team needs extra staff to handle load peaks, resulting in a lot of idle devs.
Senior devs can't properly guess where the problematic parts of fixes or features would be. They don't need to know the details, just where things will be _difficult_.
Every feature requires coordination between the teams, with queuing and prioritizing until local staff become available.
Finally, consistency allows classes of bugs to be fixed once. Fix it once and migrate everyone to the new style.
If you see a massive 50 line if/else/if/else block that can be replaced with a couple calls to std::minmax, in code that you are working on, why not replace it?
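Something like this, as a condensed, hypothetical illustration of the kind of easy win meant (not from any real codebase):

```cpp
#include <algorithm>
#include <utility>

// Before: a hand-rolled comparison chain (condensed here from the imagined 50 lines).
std::pair<int, int> lowHighOld(int a, int b, int c) {
    int lo, hi;
    if (a <= b) {
        if (a <= c) lo = a; else lo = c;
        if (b >= c) hi = b; else hi = c;
    } else {
        if (b <= c) lo = b; else lo = c;
        if (a >= c) hi = a; else hi = c;
    }
    return {lo, hi};
}

// After: the same result with the standard library.
std::pair<int, int> lowHighNew(int a, int b, int c) {
    auto [lo, hi] = std::minmax({a, b, c});
    return {lo, hi};
}
```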
But don't go trying to rewrite everything at once. Little improvements here or there whenever you touch the code. Look for the 'easy wins' which are obvious based on more modern approaches. Don't re-write already well-written code into a new form if it doesn't benefit anything.
Recently my boss said to me: "Customers want something that WORKS. If you deliver something, and it doesn't work, what's the customer going to think?" The huge drawback to putting a customer on the team is that the customer probably doesn't want to know, let alone be involved with, how the sausage is made. They want a turnkey solution unveiled to them on the delivery date, all ready to go, with no effort on their part.
Generally what you want is a customer proxy in that role, who knows or can articulate what the customer needs better than the customer themselves can. Steve Jobs was a fantastic example of someone who filled this role.
Often something that's easily brushed off by a support rep will ring a bell in the mind of a developer who has recently worked in the area of the code related to the issue.
Hmm, sounds familiar...
Bingo knows everyone's name-o
Papaya & MBS generate session tokens
Wingman checks if users are ready to take it to the next level
Galactus, the all-knowing aggregator, demands a time range stretching to the end of the universe
EKS is deprecated, Omega Star still doesn't support ISO timestamps
The number of programs not supporting ISO 8601, TODAY (no pun intended), is appalling. For example, git (which claims compatibility, but isn't).
Beginning in about the 1980s or so, with the rise of PCs and later the internet, the "genius programmer" was lionized and there was a lot of money to be made through programming alone. So systems analysts were slowly done away with and programmers filled that role. These days the systems analyst as a separate profession is, as you say, nearly extinct. The programmers who replaced the analysts applied techniques and philosophies from programming to business information analysis, and that's how we got situations like with Bingo, WNGMAN, and Galactus. Little if any business analysis was done, the program information flows do not mirror the business information flows, and chaos reigns.
In reality, 65% of the work should be in systems analysis and design, well before a single line of code is written. The actual programming takes up maybe 15% of the overall work. And with AI, you can get it down to maybe a tenth of that: using Milt Bryce's PRIDE methodology for systems analysis and development will yield specs that are precise enough to serve as context that an LLM can use to generate the correct code with few errors or hallucinations.
Sometimes they were hired only to deliver specifications, sometimes the entire system. The software they delivered was quite stable, but that's beside the point. There sure were software issues, but I was impressed by how those problems were usually contained in their respective originating systems, rarely breaking other software. The entire process was clear enough, and the interfaces between the fleet of windows/linux/mainframe programs were extremely well documented. Even the most disorganized and unprofessional third-party suppliers had an easier time writing software for us. It wasn't a joy, but it was rational; there was order. I'm not trying to romanticize the past, but, man, we sure un-learned a few things about how to build software systems.
Now, later-stage in those companies, yes, part of the reason for the chaos is because nobody knows or cares to reconcile the big-picture, but there won't be economic pressure on that without major scaling-back of growth expectations. Which is arguably happening in some sectors now, though the AI wave is making other sectors even more frothy than ever at the same time in the "just try shit fast!" direction.
But while growth expectations are high, design-by-throwing-darts like "let's write a bunch of code to make it easy to AB test random changes that we have no theory about to try to gain a few percent" will often dominate the "careful planning" approach.
Bryce's Law: "We don't have enough time to do things right. Translation: We have plenty of time to do things wrong." Which was definitely true for YC startups, FAANGs, and the like in the ZIRP era, not so much now.
Systems development is a science, not an art. You can repeatably produce good systems by applying a proven, tested methodology. That methodology has existed since 1971 and it's called PRIDE.
> That flow works much better for "take existing business, with well defined flows, computerize it" than "people would probably get utility out of doing something like X,Y,Z, let's test some crap out."
The flows are the system. Systems development is no more concerned with computers or software than surgery is with scalpels. They are tools used to do a job. And PRIDE is suited to developing new systems as well as upgrading existing ones. The "let's test some crap out" method is exactly what PRIDE was developed to replace! As Milt Bryce put it: "do a superficial feasibility study, do some quick and dirty systems design, spend a lot of time in programming, install prematurely so you can irritate the users sooner, and then keep working on it till you get something accomplished." (https://www.youtube.com/watch?app=desktop&v=SoidPevZ7zs&t=47...) He also proved that PRIDE is more cost-effective!
The thing is, all Milt Bryce really did was apply some common sense and proven principles from the manufacturing world to systems development. The world settled upon mass production using interchangeable parts for a reason: it produces higher-quality goods cheaper. You would not fly in a plane with jet engines built in an ad-hoc fashion the way today's software is built. "We've got a wind tunnel, let's test some crap out and see what works, then once we have a functioning prototype, mount it on a plane that will fly hundreds of passengers." Why would a company trust an information system built in this way? It makes no sense. Jet engines are specced, designed, and built according to a rigorous repeatable procedure and so should our systems be. (https://www.modernanalyst.com/Resources/Articles/tabid/115/I...)
> Which is arguably happening in some sectors now, though the AI wave is making other sectors even more frothy than ever at the same time in the "just try shit fast!" direction.
I think the AI wave will make PRIDE more relevant, not less. Programmers who do not upskill into more of a systems analyst direction will find themselves out of a job. Remember, if you're building your systems correctly, programming is a mere translation step. It transforms human-readable specifications and requirements into instructions that can be executed by the computer. With LLMs, business managers and analysts will soon be able to express the inputs and outputs of a system or subsystem directly, in business language, and automatically get executable code! Who will need programmers then? Perhaps a very few, brilliant programmers will be necessary to develop new code that's outside the LLMs' purview, but most business systems can be assembled using common, standard tools and techniques.
Bryce's Law: "There are very few true artists in computer programming, most are just house painters."
The problem is, and always has been, that all of systems development has been gatekept by programmers for the past few decades. AI may be the thing that finally clears that logjam.
If you know what data you need, who needs it, and where it needs to go, you have most of your system designed. If you just raw dog it then stuff is all over the place and you need hacks on hacks on hacks to perform business functions, and then you have spaghetti code. And no, I don't think domain modeling solves it. It often doesn't acknowledge the real system need but rather views the data in an obtuse way.
Per Fred Brooks: "Show me your flowcharts, but keep your tables hidden, and I shall continue to be mystified. Show me your tables, and I won't need to see your flowcharts; they'll be obvious."
It's telling that PRIDE incorporates the concept of Information Resource Management, or meticulous tracking and documentation of every piece of data used in a system, what it means, and how it relates to other data. The concept of a "data dictionary" comes from PRIDE.
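For concreteness, here's roughly the kind of thing one data-dictionary entry captures; the field names are my own illustration, not PRIDE's actual format:

```cpp
#include <string>
#include <vector>

// One entry describing a single business data element and how it is used.
struct DataElement {
    std::string name;        // e.g. "customer_credit_limit"
    std::string definition;  // what it means, stated in business language
    std::string type;        // e.g. "decimal(12,2)"
    std::string owner;       // the business function responsible for it
    std::vector<std::string> usedBy;     // systems/subsystems that read or update it
    std::vector<std::string> relatedTo;  // other elements it is derived from or constrains
};
```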
Sure, you can definitely build things and figure things out along the way. But for any sufficiently complex project, that's unlikely to yield good results.
Anyone who has worked at a large company has encountered a Galactus, that was simply never redesigned into a simple unified service because doing so would sideline other work considered higher priority.
There are too many decisions, technical details, and active changes to have someone come in and give direction from on high at intervals.
Maybe at the beginning it could sort of make sense, but projects have to evolve, and more often than not they discover something important early on in the implementation or when adding "easy" features. If someone is good at software design, you may need them even more at that point. But they can easily be detrimental if they are not closely involved and following the rest of the project's details.
You don't need one until you've got 30-70 engineers, but a strong group of collaborative architects is the most important thing for keeping software development effective and efficient at the 30-1,000 engineer range.
I think this should also apply to people who come up with or choose the software development methodology for a project. Scrum masters just don't have the same skin in the game that lead engineers do.
In 30 years in software dev, I have yet to see any significant, detailed and consistent effort extended into design and architecture. Most architects do not design, do not architect.
Senior devs design and architect and then take their design to the architects for *feedback and approvals*.
These senior devs make designs for features and only account for code and systems they've been exposed to.
With an average employment term of 2 years most are exposed to a small cut of the system, which affects the depth and correctness of their design.
And architects mostly approve, sometimes I think without even reading the docs.
At most, you can expect the architects to give generic advice and throw a few buzzwords.
By and large, they feel comfortable and secure in their positions and mostly don't give a shit!
I've been thinking about this a lot. 2~3 years is a long time, long enough to get a pretty good grasp of what a codebase maintained by 50~100 people does in pretty concrete terms, to come up with decent improvement ideas, and to see at least one or two structural ideas hit production.
If the person then stays 1 or 2 more years, they get a chance to refine further, but usually they will be moved up the ladder, Peter Principle style. If they get the chance to lead those architecture changes, the company has a chance to be on a decent path, technically speaking.
I'm totally with you on the gist of it: architects will usually be a central switch arranging these ideas coming from more knowledgeable places. In the best terms I see their role as guaranteeing consistency and making sure teams don't impede each other's designs.
https://www.goodreads.com/en/book/show/39996759-a-philosophy...
Video overview at:
I feel that's already enough time to rewrite a big part of a subsystem, or to turn the whole thing into shit (depends on the maintainer).
Software today moves quite fast. Two years is sometimes the difference between a new company and a dead company.
There were plenty of times where it would have been useful to have someone providing real architecture/design guidance, but no such person functionally existed.
Microsoft had a lot of sins, but at least they asked the coders to eat their own dogfood.
Also, the "2 year coding wizards" you describe usually don't stick around to see the results (or rather, disasters) of their decisions, and they don't have to maintain their own code.
The end.
On the other hand, there are Real Programmers [0] who will happily optimize the already-fast initializer, balk at changing business logic, and write code that, while optimal in some senses, is unnecessarily difficult for a newcomer (even an expert engineer) to understand. These systems have plenty of detail and are difficult to change, but the complexity is non-essential. This is not good engineering.
It's important to resist both extremes. Decision makers ultimately need both intimate knowledge of the details and the broader knowledge to put those details in context.
Modules with different requirements should not have a single consistent codebase. Testing strategy, application architecture, even naming should differ across different modules.
Good software designers are facilitators. They don't tell people how to build software, but say "not like that" by making the technical requirements clear. They enable design to constantly change as the needs change.
It has been a long time since I've been at a company willing to actually employ someone in that role. They require that their most senior engineers be focused on writing code themselves, at the expense of the team-building and skill-building necessary for quality software.
Instead we get bullshit like "team topologies" or frameworks that are more about how the company wants to manage teams than they are about how well the software works. We get "design documents" that are considered more important than working code. Even the senior engineers that are around aren't allowed to say "no" if it is going to interfere with some junior project manager's imagined deadline.
Software companies are penny-wise and pound foolish, resulting in shittastic spaghetti messes with microservice meatballs.
One does not need to be a programmer in order to be a great systems analyst/architect. Matter of fact it's the opposite: great analysts are good with people, and have a strong intuitive grasp of what people need in order to effectively run the business. Leaving that to programmers is a recipe for disaster, as without documentation of existing business systems and requirements and a solid design, programmers will happily build the wrong thing.
1. Value simple, effective systems
2. Understand all use cases, because they use it
3. Have enough freedom to fix small things as they find them
#3 is controversial sometimes, but I believe this flexibility and creative freedom for devs leads to much happier people and much better products.
It's not always like this, but the bigger the company, the more statistically probable it is.
But yes, the map is not the territory, and giving directions is not the same as walking the trail. The actual implementation can deviate from the plan drafted at the beginning of the project. A good explanation is found in Naur's "Programming as Theory Building", where he says the true knowledge of the system lives in the heads of the engineers who worked on it, and that knowledge is not easily transferable.