In my opinion, having had multiple technical job roles (car stereo/alarm installer, website builder, military officer, CEO, CTO, etc.), this is always the job.
The job is *always* to make the organization more effective and efficient full stop. Your role in that is what you choose and negotiate with your team throughout your life; boundaries change pre/during/post employment.
When you join a company you usually (not always) have a niche role, to fill in a gap that is preventing effective organizational execution.
If you’re mentally flexible enough to understand that your narrow focus is not the actual output, but a temporary means to an output, then you transform how you view the concept of work and relationships.
But I worked for a long time as a freelancer at big-name corporates in the Netherlands and other large software companies, and that’s just how it is everywhere. In-house senior devs have little to say too. I hope other companies/countries are different.
I know, I know... wow. Not very insightful. But for some reason, with this particular engineer, this is the starting point to talk about actual requirements. This question in particular triggers the conversation of going back to product to figure out what they truly know they want right now, not in a maybe-future. What are the actual hard, known requirements, instead of wishful thinking and the "if everything goes well we will need this" mentality of ~hopeful~ optimistic PMs.
When people argue about the rigor of software engineering (i.e. "is it really engineering?"), they often forget an important part: we are doomed to repeat mistakes or reinvent the wheel because nobody ever reads the existing research and nobody learns from the past. We're always blogging about the trendy latest thing without asking ourselves, "maybe someone in the 70s already explored this and drew valuable lessons?"
We do. There’s dozens of us!
Here’s the thing though, the number of programmers in the world has been doubling every 5 years or so for the past few decades. If you have been doing this for 5 years, you have more experience than half the industry. It’s a young field and information only propagates so fast. You too have the power to help it spread.
- If we do it <this way>, will <this requirement> be impossible to implement (without a huge rewrite cost) later? If so, and <this requirement> is a realistic possibility, then <this way> is probably a poor choice
- If we do it <this way>, will <this requirement> be harder to implement, but not terribly so? If so, then weigh the cost/probability of doing it and it not being needed vs not doing it and it being needed
- If we do it <this way>, will <this requirement> cost no more to add later than it would now? If so, then don't do it now, because you're risking unneeded work for no benefit
Admittedly, all three of those are the same "equation", just the three ranges of where the numbers stand. But it's nice to specifically ask the first and third questions... because they can cut short the analysis.
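To make that shared equation concrete, here is a toy comparison; every number in it is an invented placeholder, not an estimate from the article:

```python
# Toy expected-cost comparison; all figures are hypothetical placeholders.
p_needed = 0.3       # chance the requirement ever materialises
cost_now = 5         # days to build the flexibility today
cost_later = 8       # days to retrofit it later, if it turns out to be needed
carrying_cost = 2    # ongoing complexity we pay if we build it now and never need it

build_now = cost_now + (1 - p_needed) * carrying_cost
wait = p_needed * cost_later
print(f"build now: {build_now:.1f} days, wait: {wait:.1f} days")

# Question 1 is the case where cost_later is effectively infinite,
# question 3 the case where cost_later is roughly cost_now; both skip the arithmetic.
```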
It's common for junior engineers to want to over-engineer stuff: they want to pull in this cool library, they want to try this nice pattern, and overall they want to do a good job, and a complex architecture sounds like they put more effort into it than a two-liner. That's why junior engineers are not the team lead.
As the lead, many times it's difficult to prove why it's over-engineering. You can often only say "hmm what you suggest is pretty complicated, requires a lot of effort, and in this case I don't think it's worth it".
What makes your take more valuable than the junior engineer's take? Experience.
Now don't get me wrong: it does not mean AT ALL that juniors don't bring anything valuable. They often bring great ideas. But their lack of experience means that sometimes it's harder for them to understand why it's "too much". The lead should listen to them, understand what they say (and by that I mean that they should prove to the junior that they understand), and then refuse the idea.
If a junior feels like the lead is incompetent (i.e. does not understand their ideas and selects inferior solutions instead), then the team is in trouble. And in a way, it is the lead's responsibility.
Architectural decisions sometimes close doors and make future changes very difficult.
The only thing we know for certain is that change will happen.
In fact, it’s crossing my mind that people might not want to be accused of being lazy, and that is a motivation to over-engineer solutions.
Software breaks when data transforms in a way that typing can't solve. When data goes across a wire, or into a database, it leaves your space. Anything you do to your code risks breaking it. Integration tests solve that, but at a very high cost.
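A small illustration of the point (field names invented; the annotations hold inside the process but say nothing about what arrives over the wire):

```python
import json
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    total: float

# Inside the process the annotations hold...
order = Order(order_id=42, total=19.99)
wire = json.dumps({"order_id": order.order_id, "total": order.total})

# ...but on the other side of the wire, a producer "upgrade" made order_id a string,
# and no type checker sees that boundary.
wire_v2 = '{"order_id": "42", "total": 19.99}'
decoded = Order(**json.loads(wire_v2))   # dataclass annotations are not enforced at runtime
print(decoded.order_id + 1)              # TypeError, but only at runtime
```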
I don't have a great solution for that. It just comes down to experience: how do things change over time? You take guesses. You try to be flexible, but not so flexible that you aren't solving the problem at hand. (It doesn't do you any good to hand the user a C compiler and say "this is flexible enough to handle all of your future needs.")
Experience is, unfortunately, the worst teacher. It gives the lesson after it gives the test.
We've come full circle with coworkers telling me that "the best thing" about LLMs is that they can tell you when you have a typo in your function invocation or you forgot a mandatory parameter. This always leaves me dumbfounded, mouth open. If only we had a system to prevent this, from before LLMs!
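For reference, this is the kind of mistake a type checker or compiler has been catching for decades; a quick Python sketch (function name and parameters invented), checked with mypy:

```python
def send_invoice(customer_id: int, amount: float, currency: str) -> None:
    """Hypothetical function; the body doesn't matter for the example."""
    print(f"invoicing {customer_id}: {amount} {currency}")

# Both of these are flagged by mypy (or any compiler) before anything runs:
#
#   send_invoice(42, 10.0)          # missing mandatory parameter "currency"
#   send_invoce(42, 10.0, "EUR")    # typo in the function name
#
send_invoice(42, 10.0, "EUR")       # the correct call
```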
Do languages like Java have strong typing?
I thought so, but I can’t reconcile that with the belief that unit tests in Java would be unnecessary.
It's also static, so your type declarations will replace tests.
But it's extremely inexpressive, so you can declare very few properties, and so it will replace very few tests.
And it's inflexible, so it will get in your way all the time while you program.
Anyway, I can almost guarantee you the GP wasn't talking about Java.
Can you expand? Because my experience is they are totally orthogonal.
For me, unit testing is to ensure the function's algorithm is correct. You verify add(2, 3) == 5 and add(1, 2, 3) == 6 and add(2, Null) == Null.
But you don't generally write unit tests that test how a function behaves when you pass an unexpected type. Nobody in my experience is testing add("a", FooObject) for a function only meant to take ints or floats, to make sure it only takes ints or floats.
So they solve entirely different problems: strong typing ensures a caller provides compatible data, while unit tests ensure a callee produces correct results (not just correctly typed results) from that data. You want both, ideally.
If it's a dynamically typed language and you want to be sure that your method throws an error on invalid types (rather than, say, treating the string "yes" as a boolean with a value of true), then unit tests are a good tool for that.
I would argue that failing fast (for cases like that, where an input "could" be treated as the type you want, but almost certainly the caller is doing something wrong) is a positive thing.
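A quick sketch of both kinds of test in Python (this add variant is invented for illustration): the first checks the algorithm is correct, the second checks that it fails fast on types it could technically coerce.

```python
import pytest

def add(a, b):
    """Add two numbers, refusing anything that isn't an int or float."""
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("add() only accepts ints or floats")
    return a + b

def test_add_is_correct():
    # Correctness of the algorithm: what unit tests are really for.
    assert add(2, 3) == 5
    assert add(0.5, 0.25) == 0.75

def test_add_fails_fast_on_wrong_types():
    # Fail fast instead of quietly treating "yes" as a truthy value.
    with pytest.raises(TypeError):
        add("yes", 1)
```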
Strong typing certainly doesn’t remove the need for testing, but it does change what type of issues you test for. And for certain classes of boring code that just transform or move data, a single integration test can give you all the assurance you need.
It is just making a trade off that is often a reasonable pragmatic default, but if taken as an absolute truth, can lead to brittle systems stuck in concrete.
Not all problems can be reduced to decision problems, which is what a trivial property is.
For me, looking at how algebraic data types depend on a sum operation that uses either tagged unions or disjoint unions is a useful lens into the limits.
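For instance, a tagged union in Python (3.10+) is just such a sum type; the shape names below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

Shape = Union[Circle, Rect]   # a sum type: a value is exactly one of the variants

def area(shape: Shape) -> float:
    # The class acts as the tag; a checker can warn about unhandled variants.
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        case Rect(width=w, height=h):
            return w * h
    raise TypeError(f"unhandled variant: {shape!r}")
```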
Note that you can use patterns like ports and adapters which may help avoid some of that brittleness, especially with anti-corruption layers in more complex systems/orgs or where you don’t have leverage to enforce contracts.
But yes, if you can reduce your problems to decision problems where you have access to both T and F via syntactic or trivial semantics, you should.
But when you convert pragmatic defaults to hard requirements you might run into the edges.
There are many ways that software can fail, and strong types cover some of them. They don't remove the need for unit tests at all, but they do reduce the number needed (because you no longer need to test the things that strong typing handles).
Inheritance almost never works in "the real world" but I find being able to tie functions to the data they're expected to work on to be pretty helpful.
It's sort of like typing, really, functionX can only take FooBar variables vs making methodX on class FooBar.
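Concretely, the two spellings of the same constraint might look like this (FooBar is just the placeholder name from above):

```python
from dataclasses import dataclass

@dataclass
class FooBar:
    value: int

# Spelling 1: a free function that only accepts FooBar (the type hint carries the constraint)
def double_foobar(fb: FooBar) -> int:
    return fb.value * 2

# Spelling 2: the same behaviour tied to the data as a method
@dataclass
class FooBarWithMethod:
    value: int

    def double(self) -> int:
        return self.value * 2
```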
Like everything else you can "do it wrong" and you shouldn't be a slave to any particular software ideology.
We always start with inheritance (Car is a subtype of Vehicle; Cat is a subtype of Animal).
We need to teach encapsulation as the primary use for OO.
IME, the most effective way of using "OO" in practice is to define data classes for different entities and then affix a few fancy constructors that let you build entities out of other entities. Inheritance rarely gets used.
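Something like this, roughly (entity names invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RawEvent:
    user_id: int
    payload: str

@dataclass(frozen=True)
class AuditRecord:
    user_id: int
    summary: str

    @classmethod
    def from_event(cls, event: RawEvent) -> "AuditRecord":
        # "Fancy constructor": build one entity out of another, no inheritance involved.
        return cls(user_id=event.user_id, summary=event.payload[:80])
```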
IMHO separating data formats and functions works decently enough; interface/protocol/duck typing is more elegant than OO classes.
As a real-world image, a barcode scanner can be applied to anything that has a barcode, regardless of what that thing is. And I'd wager 99% of what we're trying to do fits that mold. When authenticating a user, the things that matter will be whether it's a legitimate call and whether the user is valid. Forcing that logic into classes or filtering by user type quickly becomes noise IMHO.
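That barcode-scanner image maps pretty directly onto protocol/duck typing; a rough sketch with invented class names:

```python
from typing import Protocol

class HasBarcode(Protocol):
    barcode: str

def scan(item: HasBarcode) -> str:
    # The scanner doesn't care what the item is, only that it has a barcode.
    return f"scanned {item.barcode}"

class CerealBox:
    def __init__(self, barcode: str):
        self.barcode = barcode

class LibraryBook:
    def __init__(self, barcode: str, title: str):
        self.barcode = barcode
        self.title = title

print(scan(CerealBox("012345678905")))
print(scan(LibraryBook("978-0131103627", "The C Programming Language")))
```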
Inheritance works just fine in the real world. It's just not the only tool in the box, and many times other tools work better. But, especially when limited to shallow hierarchies, it's very useful.
What's the deal with this? I'm not an OO evangelist at all, but I often find myself using objects like you describe: as a mechanism to group related functions and data.
I feel there are people who see OO like a philosophy on how to architect stuff, and from that perspective, the idea of a "purely OO system" is perhaps a little unwieldy.
But from the perspective of OO as a low level tool to help group stuff in programming, as part of some other non-pure-OO system - it works really well and makes a lot of sense for me. I've often done this in environments around people who are outspoken anti-OO who either haven't noticed or haven't complained.
Am I a bad person, are you like me, are we idiots somehow?
There are tools, and we as professionals are expected to use them when they make sense. That's all. If you use a tool badly, don't blame the tool.
> Remember you can still add in that complication tomorrow
is directly undermined later with:
> When should you create stuff just in case?
> ...
> 1. There is a reasonable chance it will be useful later
> 2. It will be difficult to add in later
> 3. It won't meaningfully slow down the meeting of more likely requirements
Whenever I've pushed to over-engineer, it's because I've developed a strong hunch that points 1 and 2 are true and I'm being defensive about my time and effort next week.
And if you're not allowed to push back on the basis of points 1 and 2, it's a sign of organizational problems where product folks sit at the top of the hierarchy and hand down dictates to builders without consulting with builders as equal partners.
I think you can’t stress this point enough. In my experience, anything that is not implemented according to some norm, or isn't "clean", or is done in an unusual way is considered a hack. Even if it perfectly solves the problem with the least amount of cruft. That makes me sad.
Obviously not every function should be -- many are so obvious and straightforward that there's nothing to test -- but every function that does anything vaguely "algorithmic" should be. Unit testing is really important for catching logic errors.
> Instead, write higher level tests close to the client/user facing behaviour that actually give you protection against breaking things unintentionally
Yes, these are good. But they're a different kind of test. There are tests for correctness, and tests that the program runs. You need both.
In fact, sometimes you even need to split up functions smaller than they otherwise would be, just so you can test an inner logic portion independently.
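For example (names invented): pull the pure logic out of an I/O-heavy function so the logic gets its own tiny unit test, and leave the surrounding I/O to higher-level tests.

```python
def parse_retry_after(header_value: str) -> int:
    """Pure inner logic, split out so it can be unit tested on its own."""
    value = header_value.strip()
    return int(value) if value.isdigit() else 0

def test_parse_retry_after():
    assert parse_retry_after(" 120 ") == 120
    assert parse_retry_after("soon") == 0   # malformed header falls back to 0
```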
I've had this exact same discussion with people before. The same people that say "unit tests are worthless because the implementation could change, then the test gets thrown away". Honestly, it drives me bonkers because that entire argument makes no sense to me.
It's as banal as saying that when you change a function definition, you have to go change all the places that call it. What do you expect?
Why do you need both?
Some software is so small and simple that it's possible to write a high level running/integration test that covers all the practical correctness tests that application might need too.
You can say "yeah, but they'd be better if they had unit test" but that's the point being made: eventually you reach a place where more tests, even those recommended as best practice, don't actually deliver any more _real world_ value, and even make the code harder and slower to maintain.
Sure, you don't need unit tests if you don't need unit tests because a program really is that simple. But that's an exception for tiny programs/modules, not the rule.
> eventually you reach a place where more tests, even those recommended as best practice, don't actually deliver any more _real world_ value
I explicitly said unit tests are for algorithmic code that can have logic errors. Obviously, if you've written tests for all those, you don't need any more.
> and even make the code harder and slower to maintain.
But you can trust the code is correct. Obviously this is the tradeoff, and for anything serious it's the right tradeoff.
You seem to be arguing against tests generally except for the most superficial ones. That's a recipe for buggy and often hard-to-understand, underspecified code.
Yeah, no. Every time I saw code written by someone who attempted to avoid OOP, it ended up passing a huge 'context' parameter to most functions, effectively reinventing Python's OOP but worse.
Use pure functions as the starting point, but when you find yourself passing complex structures around (any abstract word in parameter names, like 'context', 'data', or 'fields', is a sign of that), just use OOP.
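Roughly what that tends to look like (hypothetical names), and the explicit version it quietly reinvents:

```python
# The "avoided OOP" version: every function takes a grab-bag context dict.
def render_header(context):
    return f"{context['site_name']} - {context['user_name']}"

def render_footer(context):
    return f"(c) {context['site_name']}"

# The same thing with the structure made explicit.
class Page:
    def __init__(self, site_name: str, user_name: str):
        self.site_name = site_name
        self.user_name = user_name

    def header(self) -> str:
        return f"{self.site_name} - {self.user_name}"

    def footer(self) -> str:
        return f"(c) {self.site_name}"
```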
I think it can be all of these things, which in my opinion partially undermines the GP's point.
Recommended related musings: https://wiki.c2.com/?ClosuresAndObjectsAreEquivalent
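The equivalence in miniature (a counter is the classic toy example):

```python
# An object holding state behind methods...
class Counter:
    def __init__(self):
        self._n = 0

    def increment(self) -> int:
        self._n += 1
        return self._n

# ...and a closure holding the same state behind a function.
def make_counter():
    n = 0
    def increment() -> int:
        nonlocal n
        n += 1
        return n
    return increment

c, f = Counter(), make_counter()
assert c.increment() == f() == 1
assert c.increment() == f() == 2
```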
The truth is that writing good code takes experience. Those who live by the rule "thou shalt not over-engineer" risk writing bad code. Those who live by the rule "thou shalt know all the patterns and use them" risk writing bad code.
You should strive to write code that others can understand and maintain, period. If you need to justify your lack of "something" ("It's not a hack because..." or "I don't use OOP because..." or "I duplicated this code because..."), then it feels like it says something about your opinion of your own code, IMHO.
It's perfectly ok to not use a software pattern if it's not useful. It's ok to duplicate code if you know it will likely diverge in the future. Small and simple is the way.
I feel this sort of opinion is simplistic. "Explaining" is a need that is sparked by both sides. Just because someone is having doubts or questioning your work, that doesn't mean they are automatically right and you are automatically bound to introduce changes. Sometimes you get questions from people who don't even have context on the problem domain and why you are taking path A instead of path B.
Also, sometimes your choices can be questioned by opinionated peers who feel compelled to bikeshed over vague and subjective style instead of objective technical issues. Is this something that should cause churn in your PRs? To give an example, I once had the displeasure of working with an opinionated junior developer who felt compelled to flag literal whitespace as a critical problem in a PR. Instead of onboarding a source code formatter, said junior developer had written a personal markdown file with their opinions on style and was trying to force that as a reference. Is this sort of demand for justification something you think should be accommodated?
I was more referring to the need to write an article explaining why you think it's (not) always a mistake to do X.
> Just because someone is having doubts or questioning your work that doesn't mean they are automatically right
Of course not. Some discussions are constructive. And ideally one learns to recognise them with experience.
Now if you find yourself in a debate about styling or whether object-oriented programming is fundamentally bad, my opinion is that this is not a debate worth having. If it blocks PRs, then there is a problem, and someone higher in the hierarchy needs to solve it (because if there is the problem in the first place, it means that the team cannot solve it themselves). This is what hierarchy is for.
Have to admit I am curious: what’s the context / how has it helped you more specifically?
/* 1-21. Write a program entab that replaces strings of blanks by the minimum number of tabs and blanks to achieve the same spacing. Use the same tab stops as for detab. When either a tab or a single blank would suffice to reach a tab stop, which should be given preference? */
When confronted with a problem like this, I begin to think, "Well what's the most robust way of going about this task? What's a simple, good, and useful rule that will accomplish the stated goal?" And I'm not quite sure what happens next - figuring that out might require some deeper introspection, but I end up with the proposed solution:
"Any time we encounter consecutive whitespace characters, including spaces, tabs and newlines, ignore the literal characters and instead simply add up exactly how many columns of whitespace they are going to take up, and then, find the smallest number of newlines, tabs and spaces we can print to the screen to match that amount of whitespace. This way we accomplish the stated goal and also end up with a nice text sanitizer."
I'm still mulling all of this over, but I'm pretty confident this goes so far beyond the stated problem that it could be considered self-sabotage. There are a lot of moving parts in my solution, and I don't have the cognitive tools to break up a problem like that yet. (I'd like to eventually, of course, but I have to stay focused on what I'm doing!)
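For contrast, here is a minimal sketch of just the stated exercise (in Python rather than C, assuming tab stops every 8 columns and giving tabs preference when either a tab or a single blank would reach the stop):

```python
TABSTOP = 8  # assumed tab width, matching detab from the previous exercise

def entab(line: str) -> str:
    """Replace runs of blanks with the minimum number of tabs and blanks."""
    out, col, i = [], 0, 0
    while i < len(line):
        if line[i] == " ":
            start = col
            while i < len(line) and line[i] == " ":
                col += 1
                i += 1
            # Emit a tab for every tab stop the run crosses (tab preferred when a
            # single blank would also reach the stop), then pad with blanks.
            while (start // TABSTOP + 1) * TABSTOP <= col:
                out.append("\t")
                start = (start // TABSTOP + 1) * TABSTOP
            out.append(" " * (col - start))
        else:
            if line[i] == "\t":
                col = (col // TABSTOP + 1) * TABSTOP
            else:
                col += 1
            out.append(line[i])
            i += 1
    return "".join(out)

print(repr(entab("a       b")))   # 'a\tb' -- seven blanks collapse into one tab
```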
Tangentially related maybe? Witches' Loaves https://www.littlefox.com/hk/supplement/org/C0002439