Sure, we’ve seen some pretty gnarly accidents, and there is no reasonable situation where risking death is a sane choice.
But ask yourself: is it the elevator's job to prevent an accident? If you think so, I suggest you never leave your home again, as safety is your own concern.
Like and subscribe for other posts like “knife handles? What an idiot” and “never wear a helmet you coward”.
Unironically, the biggest flame-wars I ever saw on forums back in the day were over whether mandatory bike helmets made cycling safer or more dangerous.
However, if I had to ride a bicycle on a public road with cars zooming past me recklessly, I would absolutely wear a helmet.
Helmets are fine for sport riding, but inconvenient if you want to ride 5 minutes to the shops on a whim. And that kind of riding is usually less intense and safer anyway, I presume. Football has helmets; walking doesn't.
>But before you run out of fingers and toes, you have created languages that contain dozens of keywords, hundreds of constraints, a tortuous syntax, and a reference manual that reads like a law book. Indeed, to become an expert in these languages, you must become a language lawyer (a term that was invented during the C++ era.)
And this was written before Swift gained bespoke syntax for async/await, actors and actor isolation, some SwiftUI crap, and maybe other things; honestly, I don't even bother to follow it anymore.
Pretty nifty. It's for cases where your type is a container of something (as opposed to anything).
I.e. you can .map (or .Select if you're .NET-inclined) over the Chars in a String, but you can't map them to some other type, because a String can't hold any other type.
https://hackage.haskell.org/package/mono-traversable-1.0.21....
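A rough Kotlin analogue of the idea (illustration only; the linked package is Haskell's mono-traversable, and this omap extension is just made up here to mirror its omap):

// A String is a container whose elements are always Chars, so a map that
// returns another String can only take Char -> Char.
fun String.omap(f: (Char) -> Char): String =
    buildString { this@omap.forEach { append(f(it)) } }

fun main() {
    println("hello".omap { it.uppercaseChar() })  // prints HELLO
}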
> [...] (a term that was invented during the C++ era.)
...like it's some sort of relic, or was in 2017
Kotlin's final-by-default is also just that - a default. In Java you can just declare your classes `final` to get the same behavior, and if you don't like final classes then go ahead and declare all of them `open`.
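A minimal Kotlin sketch of that default (class names made up):

open class Base            // explicitly opted in to subclassing
class Derived : Base()     // fine

class Locked               // final by default
// class Nope : Locked()   // compile error: Locked is final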
I also disagree with the author's claim that languages with many features require you to be a "language lawyer", and that simpler languages are therefore better. It's of course a balance, and there are languages like C++ and Haskell where the number of features becomes a clear distraction, but for simpler things like nullable types and final-by-default, the language is just helping you enforce the conventions you would need anyway when working with a large code base. In dynamically typed languages you just have to be a "convention lawyer" instead, and you get no tool support.
I suppose it's all just a balance: simplicity versus expressiveness, foot guns versus inflexibility, conciseness versus ceremony, dev velocity versus performance in production.
I'm okay with shifting some of the burden of common errors from the developer to the language if that improves reliability or maintainability. But Martin has a point that no guard rails can ever prevent all bugs, and it can be annoying when a language forces lots of new ceremony that seems meaningless.
That's why learning the more academic, 'non-practical' aspects of computer science is sometimes beneficial. Otherwise very few people will naturally develop the abstract thinking that lets them see that an uncaught exception and a null pointer dereference are exactly the same 'kind of bug.'
Anyway, the author got it completely upside down. The stricter mental model of static typing came first (in more academic languages like Haskell and OCaml). Then Java etc. half-assed it. Then we have Swift and Kotlin and whatnot trying to un-half-ass it, while keeping some terminology from Java etc. so as not to scare Java programmers.
E.g. Java treats Cat[] as a subtype of Animal[] (arrays are covariant; generic Lists are simply invariant). But subtyping like that is only sound when reading from the container. The correct behavior would be:
- `readonly List<Cat>` is a subtype of `List<Animal>`
- `writeonly List<Cat>` is a supertype of `List<Animal>`
- `readwrite List<Cat>` has no relationship with `List<Animal>`
But Java doesn't track whether a reference is readable or writable. The runtime makes every array reference read-write, but the type checker's covariance rule is only sound for read-only references.
This results in both:
- incorrect programs passing the type checker, e.g. when you try to write an Animal into an Animal[] that (unbeknownst to you) is actually a Cat[], you get an ArrayStoreException at runtime
- correct programs not passing the type checker, e.g. passing a List<Animal> into an appendCat(List<Cat> output) function is a type error, even though it would be safe.
(Although all that assumes you're actually following the Liskov substitution principle, i.e. writing your custom subtypes to obey the subtyping laws the type checker assumes. You can always override a method to throw UnsupportedOperationException, at which point the type checker goes out the window.)
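For what it's worth, Kotlin encodes exactly these three relationships with variance annotations; a minimal sketch, with Animal, Cat, and appendCat as illustrative names:

open class Animal
class Cat : Animal()

// Write-only view of the list (contravariant), so a MutableList<Animal> is accepted.
fun appendCat(output: MutableList<in Cat>) {
    output.add(Cat())
}

fun main() {
    val cats: List<Cat> = listOf(Cat())

    // Kotlin's List<out E> is read-only, so List<Cat> really is a subtype of List<Animal>.
    val animals: List<Animal> = cats

    // MutableList<E> is read-write and therefore invariant; this would not compile:
    // val oops: MutableList<Animal> = mutableListOf<Cat>()

    appendCat(mutableListOf<Animal>())  // safe, and the type checker agrees
    println(animals.size)
}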
(Not saying Java's attempt to remedy C's problems wasn't half-assed; it was.) But the trend of plugging these holes is primarily motivated by empirical evidence about classes of bugs, not by the elegance of academic research.
As Bjarne Stroustrup famously quipped:
> “There are only two kinds of languages: the ones people complain about and the ones nobody uses.”
Swift, Kotlin, Rust, and C++ are attempts to become languages that everyone complains about; Haskell and OCaml are not.
1) For "Nullable Types", I see that it is VERY good to think about if some type can be null or not, or use a type system that does not allow nulls, so you need some "unit" type, and appropriately handle these scenarios. I think it is ok the language enforces this, it really, really helps you to avoid bugs and errors sooner.
2) For "Open/Sealed Classes", my experience says you never (or very rarely) know that a class will need to be extended later. I work with older systems. See, I don't care if you, the original coder, marked this class as "sealed", and it does not matter if you wrote tons of unit tests (like the author advocates), my customer wants (or needs) that I extend that class, so I will need to do a lot of language hacks to do it because you marked as sealed. So, IMHO, marking a class as "open" or "sealed" works for me as a hint only; it should not limit me.
First, C# proudly declares itself strongly typed. After writing some code in Zig (a project just before this one, also undertaken as a learning opportunity, and not yet finished), I was confused: this is what's called strongly typed? After Zig (and Rust), C# felt more like Python to me. Yes, there are types. No, they are not very useful in limiting the expression of absurdity or helping the expression of intent.
Second, tests. How do you write tests for a mod that depends on an undocumented 12-year-old codebase plus half a dozen other mods? Short answer: it's infeasible. You can maybe extract some kind of core code from your mod and test that, but that doesn't help the glue code, which is easily 50-80% of any given mod.
So what's left? I feel a great temptation to extract that core part and rewrite it in Zig. If Unity's C#-flavored FFI worked between Linux and Windows, if marshalling data wouldn't kill performance outright, if it wouldn't scare off potential contributors (and it will, of course), if, if...
I guess what I wanted to say is that tests are frequently overrated and not always possible. If the language itself lends a hand, even one as small and wimpy as C#'s, don't reject it as some sort of abomination.
Funnily enough, Uncle Bob himself evangelised and popularised the solution to this: Dependency Inversion. (Not to be confused with dependency injection, IoC containers, Spring, or Guice!) Your call chains must flow from concrete to abstract. Concrete is: machinery, IO, DBs, other organisations' code. Abstract is what your product owners can talk about: users, taxes, business logic.
When you get DI wrong, you end up with long, stupid call-chains where each developer tries to be helpful and 'abstract' the underlying machinery:
UserController -> UserService -> UserRepository -> PostgresConnectionPoolFactory -> PostgresConnectionPool -> PostgresConnection
(Don't forget to double each of those up with file-names prefixed with I - for 'testing'* /s )

Now when you simply want to call userService.isUserSubscriptionActive(user), of course anything below it can throw upward. Your business logic to check a user subscription now contains rules on what to do if a pooled connection is feeling a little flakey today. It's at this point that Uncle Bob 2017 says "I'm the developer, just let me ignore this error case".
What would Uncle Bob 2014 have said?
Pull the concrete/IO/dependency stuff up and out, and make it call the business logic:
UserController:
user? <- (UserRepository -> PostgresConnectionPoolFactory -> PostgresConnectionPool -> PostgresConnection)
// Can't find a user for whatever reason? return 404, or whatever your coding style dictates
result <- UserService.isUserSubscriptionActive(user)
return result
The first call should be highly decorated with !? or whatever variant of checked exception you're using. You should absolutely anticipate that a DB call or REST call can fail. It shouldn't take much extra code, especially if you've generalised the code to 'get thing from the database' rather than writing it out anew for each new concern.

The second call should not permit failure. You are running pure business logic on a business entity. Trivially covered by unit tests. If isUserSubscriptionActive does 'go wrong', fix the damn code, rather than decorating your coding mistake as a checked exception. And if it really can't be fixed, you're in 'let it crash' territory anyway.
* I took a jab at testing, and now at least one of you's thinking: "Well how do I test UserService.isUserSubscriptionActive if I don't make an IUserRepository so I can mock it?" Look at the code above: UserService is passed a User directly - no dependency on UserRepository means no need for an IUserRepository.
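A minimal Kotlin sketch of that shape (User, UserService, and isUserSubscriptionActive follow the pseudocode above; the subscription field and the now parameter are invented for the example):

import java.time.Instant

class User(val subscriptionEndsAt: Instant)

object UserService {
    // Pure business logic on a business entity: no IO, nothing to throw.
    fun isUserSubscriptionActive(user: User, now: Instant): Boolean =
        user.subscriptionEndsAt.isAfter(now)
}

// Trivially unit-testable without mocks: construct a User, call the function.
fun main() {
    val now = Instant.parse("2024-01-01T00:00:00Z")
    val user = User(subscriptionEndsAt = Instant.parse("2024-06-01T00:00:00Z"))
    check(UserService.isUserSubscriptionActive(user, now))  // no IUserRepository needed
}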
The author's thesis seems to be that it's preferable to rely on the programmer who wrote the bugs to write even more bugs in tests, rather than on a compiler or type system that can prevent these things from happening in the first place?
So obviously it's an opinion and he's entitled to it, but (in my own opinion) it is so, so, so, on-its-face, just flat-out wrong that I'm concerned it's creating developers who believe that writing all the tests a language and compiler would save you from writing (bugs and all) is a valid way to prevent null pointer dereferences.
On top of that, every test that a type system would have made unnecessary incurs an extra maintenance tax that you have to pay whenever you change the API.