This approach is great when:
* program requirements are clear
* correctness is more important than prototyping speed, because every error has to be handled
* no need for a concise stack trace, which would require an additional layer on top of simple tuples
* the language itself has great support for binding and mapping values, e.g. first-class monads or a bind operator (sketched below)
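For instance, with first-class map/bind support, chaining fallible steps stays flat instead of nesting checks. A minimal hand-rolled TypeScript sketch (parse and validate are hypothetical names):

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    // andThen short-circuits on the first error, like a monadic bind.
    const andThen = <T, U, E>(
      r: Result<T, E>,
      f: (value: T) => Result<U, E>,
    ): Result<U, E> => (r.ok ? f(r.value) : r);

    declare function parse(raw: string): Result<{ port: number }, string>;
    declare function validate(cfg: { port: number }): Result<{ port: number }, string>;

    const config = andThen(parse("port=8080"), validate); // an error flows through untouched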
Good job by the author on acknowledging that this error handling approach is not a silver bullet and has tradeoffs.
Call it a thought experiment. We start with a clean implementation that satisfies the requirements. It makes a bold assumption that every star in the universe will align to help us achieve the goal.
Now we add logging and error handling.
Despite my best intentions, years of experience, and a clean starting point, the outcome was a complete mess.
It brings back memories of 2006, when I was implementing deep linking for Wikia. I started with a "true to the documentation" implementation which was roughly 10 lines of code. After handling all the edge cases and browser incompatibilities I ended up with a whopping 400 lines.
Doing exactly the same as the original lines did, but cross-browser compatible.
If error handling and logging isn't necessary to satisfy requirements, why bother with them at all?
In my experience I've used exceptions for things that really should never fail, and optional for things that are more likely to.
For practitioners it serves mainly as a pointless gotcha. In safety-critical domains the batteries that come with C++ are useless, so while they are right to observe that this would be a major problem there, they offer no real relief.
> The most common approach is the traditional try/catch method.
Exceptions, complete with try-catch-finally, were developed in the 60s and 70s, and languages such as Lisp and COBOL adopted them.
So I'm not sure what you're calling "much later", as they fully predate C89, which is about as far back as most people look when talking about programming languages.
- Program is broken. Probably need to abort program. Example: subscript out of range.
- Data from an external source is corrupted. Probably need to unwind transaction but program can continue. Example: bad UTF-8 string from input.
- Connection to external device or network reports a problem.
-- Retryable. Wait and try again a few times. Example: HTTP 5xx errors.
-- Non-retryable. Give up now. Example: HTTP 4xx errors. (Both retry cases are sketched in code below.)
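To make the taxonomy concrete, here is a minimal TypeScript sketch (all names hypothetical) of tagging errors with those categories so a catch site can decide between aborting, unwinding, retrying, and giving up:

    type ErrorKind = "bug" | "bad-data" | "retryable" | "non-retryable";

    class AppError extends Error {
      constructor(message: string, readonly kind: ErrorKind) {
        super(message);
      }
    }

    // HTTP status -> bucket, per the examples above.
    function classifyStatus(status: number): ErrorKind {
      if (status >= 500) return "retryable";     // 5xx: wait and try again
      if (status >= 400) return "non-retryable"; // 4xx: give up now
      return "bug"; // classifying a success status is a programming error
    }

    async function withRetries<T>(op: () => Promise<T>, attempts = 3): Promise<T> {
      for (let i = 1; ; i++) {
        try {
          return await op();
        } catch (err) {
          const retryable = err instanceof AppError && err.kind === "retryable";
          if (!retryable || i >= attempts) throw err; // let upstream unwind or abort
          await new Promise(r => setTimeout(r, 1000 * i)); // wait, then try again
        }
      }
    }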
Python 2 came close to that, but the hierarchy for Python 3 was worse. They tried; all errors are subclasses of a standard error hierarchy, but it doesn't break down well into what's retryable and what isn't.
Rust never got this right, even with Anyhow.
What to do with an error depends on who catches it. That's probably why Python got it wrong, and then Rust said "worse is better".
If you are using exception handlers for transmitting errors rather than exceptions (i.e. what should have been a compiler error but wasn't detected until runtime), wrapping should be mandatory; otherwise you'll invariably leak implementation details, which is a horrid place to end up. Especially if you don't have something like checked exceptions to warn you that the implementation has changed.
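A minimal TypeScript sketch of that mandatory wrapping, assuming ES2022 error causes (StorageError and the client are hypothetical): callers depend on this layer's error type, never the underlying library's, and the original error rides along as the cause so no diagnostic detail is lost.

    interface Profile { id: string; name: string }
    declare const s3Client: { getObject(key: string): Promise<Profile> };

    class StorageError extends Error {
      constructor(message: string, options?: { cause?: unknown }) {
        super(message, options); // `cause` requires ES2022
        this.name = "StorageError";
      }
    }

    async function loadProfile(id: string): Promise<Profile> {
      try {
        return await s3Client.getObject(`profiles/${id}`);
      } catch (err) {
        // Swapping the storage library out later cannot break callers' catch blocks.
        throw new StorageError(`failed to load profile ${id}`, { cause: err });
      }
    }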
As a boring example, I might write something that detects when a resource gets hosted, e.g. goes from 404 -> 200.
The best I imagine you can do is be able to easily group each error and handle each group appropriately.
See table here: https://github.com/cognitect-labs/anomalies
1. returning a tuple with an ok or fail value (so errors as values), plus
2. pattern matching on return values (which makes error values bearable), possibly using the with/do/end macro, plus
3. failing on unmatched errors and retrying the failed operation (fail fast), thanks to supervision trees.
Maybe that's because the latter feature is not available anywhere near for free in most runtimes, and because Erlang-style pattern matching is also uncommon.
The approach requires a language that's built on those concepts, not one in which they are bolted on unnaturally as an afterthought (there the approach becomes burdensome); a rough TypeScript approximation follows the links below.
Pattern matching: https://hexdocs.pm/elixir/pattern-matching.html
With: https://hexdocs.pm/elixir/1.18.1/Kernel.SpecialForms.html#wi...
Supervisors: https://hexdocs.pm/elixir/1.18.1/supervisor-and-application....
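For contrast, a rough TypeScript approximation of the ok/error-tuple style (all names hypothetical) shows how much of that machinery has to be hand-rolled outside Erlang/Elixir:

    type Ok<T> = { ok: true; value: T };
    type Err<E> = { ok: false; error: E };
    type Result<T, E> = Ok<T> | Err<E>;

    function parsePort(raw: string): Result<number, string> {
      const n = Number(raw);
      return Number.isInteger(n) && n > 0 && n < 65536
        ? { ok: true, value: n }
        : { ok: false, error: `invalid port: ${raw}` };
    }

    // No `with` macro: every step is matched by hand.
    const r = parsePort("8080");
    if (r.ok) {
      console.log("listening on", r.value);
    } else {
      // And no supervision tree: "fail fast" here just means crashing for real.
      throw new Error(r.error);
    }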
1. Stack traces with fields/context besides a string
2. Wrapping errors
3. Combining multiple errors
> In the JS world this could be true, but for Rust (and statically typed compiled languages in general) this is actually not the case… Go pointers are the only exception to this: there is no nil-check protection at the compile level. But Rust, Kotlin, etc. are solid.
Yes, it actually is the case. You cannot check/validate for every error, not even in Rust. I recommend getting over it.
For a stupid-simple example: you can't even check if the disk is going to be full!
The disk being full is a real error you have to deal with, and it can happen at any line in your code through no fault of your own. It doesn't always happen at write(); it can also hit when you allocate pages for writing (e.g. as a SIGSEGV). You cannot fully prevent it in code, and aborting or unwinding will only ever annoy users, but you can do something.
We live in a multitasking world, so our users can deal with out-of-disk and out-of-memory errors by deleting files, adding more storage, closing other (lower priority) processes, paging/swapping, and so on. So you can wait: maybe alert the user/operator that there is trouble but then wait for the trouble to clear.
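As a sketch of "alert, then wait for the trouble to clear", here is a minimal Node.js/TypeScript example (path and interval hypothetical) that treats a full disk as a waitable condition rather than a fatal one:

    import { promises as fs } from "node:fs";
    import { setTimeout as sleep } from "node:timers/promises";

    async function persist(path: string, data: Uint8Array): Promise<void> {
      for (;;) {
        try {
          await fs.writeFile(path, data);
          return;
        } catch (err: any) {
          if (err?.code !== "ENOSPC") throw err; // only out-of-disk is waitable here
          console.error("disk full; waiting for space to be freed...");
          await sleep(30_000); // operator can delete files, add storage, etc.
        }
      }
    }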
Also: dynamic-wind is a useful general-purpose programming technique that is awkward to emulate, and I personally dislike subclassing BackTrack from Error because of what can only be a lack of imagination.
That's a weird take. I've been working for multiple decades now with systems that have no UI to speak of; their end-users are barely aware that there's a whole system behind what they can see, and that's a good thing because they become aware of it when it causes them trouble.
I take from my mentor in programming this stance for many things, including error handling: the best solution to a problem is to avoid it. That's something everybody knows actually, but we can forget that when designing/programming because one has so many things to deal with and worry about. Making the thing barely work can be a challenge in itself.
For errors, this usually means: don't let them happen. E.g. avoid OOM by avoiding dynamic allocation as much as possible; statically pre-allocate everything, even if it means megabytes of unused reserved space. Don't design your serialization format with quotes around your keys just to allow "weird" key names, a feature that nobody will ever use and that creates opportunities for errors.
Of course it is not always possible, but don't miss the opportunity when it is.
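A minimal sketch of the "statically pre-allocate everything" idea in TypeScript (sizes hypothetical): the pool is created once at startup, so steady-state code never allocates and OOM can only happen at boot, where it is trivial to handle.

    const POOL_SIZE = 64;
    const BUF_BYTES = 64 * 1024;

    // Allocated once, up front; megabytes of reserved space by design.
    const pool: Uint8Array[] = Array.from(
      { length: POOL_SIZE },
      () => new Uint8Array(BUF_BYTES),
    );
    const freeList: number[] = pool.map((_, i) => i);

    function acquire(): Uint8Array {
      const i = freeList.pop();
      if (i === undefined) throw new Error("pool exhausted"); // bounded, by design
      return pool[i];
    }

    function release(buf: Uint8Array): void {
      freeList.push(pool.indexOf(buf)); // O(n) lookup is fine for a sketch
    }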
I appreciate that, but...
> I've been working for multiple decades now with systems that have no UI to speak of; their end-users are barely aware that there's a whole system behind what they can see, and that's a good thing because they become aware of it when it causes them trouble.
Notice I said "user" not "end-user" or "customer".
This was not an accident.
In your system (as in mine) the "user" is the operator.
> the best solution to a problem is to avoid it.
That's your opinion man. I don't know if you can avoid everything (I certainly can't).
Something to consider is why Erlang people have been trying to get people to "let it crash" and just deal with that, because enumerating the solutions is sometimes easier than enumerating the problems.
Yes, if you can afford it, I would say it is a way to avoid the problem of handling errors in a bug-free way. But it is more than yet another error handling tactic, it is a design strategy.
Isn’t this addressed by preallocating data files in advance of writing application data? It’s pretty common practice for databases for both ensuring space and sometimes performance (by ensuring a contiguous extent allocation).
As an example, a disk block may go bad, requiring the OS to find another one to back that pre-allocated disk space. If you try to prevent that by writing to the preallocated space right after allocating it, you can still hit a case where the block goes bad after you did that.
Allocation isn't the only thing that can fail: Actually writing to the blocks can fail, and just because you can write zeros doesn't mean you can write anything else.
You really can't know until you try. This is life.
I think if you needed a better example of something you can't defend against in order to get the main idea, that's one thing, but I'm not giving advice in bad faith: Can you say the same?
fallocate() failing is exactly the same as write() failing from the perspective of the user, because the disk is still full, and the user/operator responds exactly the same way (by waiting for cleanup, deleting files, adding storage, etc).
Databases (the example given) actually do exactly as koolba suggests, and ostensibly for the reason of surfacing the error to the application. The point is what to do about the error itself though, not about whether fallocate() always works or is even possible.
That means good UX, intuitive interfaces, good affordances, user guidance (often, without requiring them to read text), and simplicity.
When an error is encountered, it needs to be reported to the user in as empathetic and useful a manner as possible. It also needs to be as “bare bones” simple as can reasonably be managed.
Designing for low error rates starts from the requirements. Good error reporting requires a lot of [early] input from non-technical stakeholders.
Lost packets, high latency, crashed disks, out of memory, etc.
You can talk to your users sure but you need to handle this stuff at some level either way. Shit happens!
But we need to plan for it from Day One, and that can also include things like choosing good technology stacks.
Like I said, when inevitable errors happen, how we communicate (or, if possible, mitigate silently) the condition, is crucial.
[EDITED TO ADD] Note how any discussion of improving software quality is treated hereabouts. A bit discouraging.
Often, there is disagreement over the definition of “bug.”
There’s the old joke, “That’s not a bug, it’s a feature!”, but I have frequently gotten “bug reports,” stating that the fact that my app doesn’t do something that the user wants, is a “bug.”
They are sometimes correct. The original requirements were bad. Even though the app does exactly what it says on the tin, it is unsuitable for its intended demographic.
I call that a “bug.”
Sure, monads are cool and I’d be tempted to use them. They make it impossible to forget to check for errors, and if you don’t care you can panic.
But JS is not Rust. And the default is obviously to use exceptions.
You’ll have to rewrap every API under the moon. So for monads in JS to make sense, you need a lot of the kind of code that’s awkward to write with exceptions, to justify the costs.
I’m not sure the example of doing a retry in the API is “enough” to justify the cost. Also in the example, I’m not sure you should retry. Retries can be dangerous especially if you pile them on top of other retries: https://devblogs.microsoft.com/oldnewthing/20051107-20/?p=33...
It becomes very similar to try-catch exception handling at the place you draw the boundary, then within the boundary it’s monad land.
If you haven’t wrapped it in a monad, chances are you wouldn’t have wrapped it in a try-catch either!
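A hand-rolled TypeScript sketch of that boundary (not any particular library's API): try/catch lives only at the edge, converting thrown exceptions into values, and everything inside stays in monad land.

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    async function fetchJson<T>(url: string): Promise<Result<T, Error>> {
      try {
        const res = await fetch(url);
        if (!res.ok) return { ok: false, error: new Error(`HTTP ${res.status}`) };
        return { ok: true, value: (await res.json()) as T };
      } catch (e) {
        // The boundary: exceptions stop here and become ordinary values.
        return { ok: false, error: e instanceof Error ? e : new Error(String(e)) };
      }
    }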
I assert that try/catch encourages lazy error handling, leading to a worse debugging experience and a longer mean time to problem discovery.
Naked err returns can be a source of pain.
Funny enough, your code looks like it is inspired by Go, and Go experimented with adding stack traces automatically. Upon inspecting real-world usage, they discovered that nobody ever used the stack traces, and came to learn that good errors already contain everything you'd want to know, making stack traces redundant.
Whether or not you think they made the right choice, it is refreshing that the Go project applies the scientific method before making a choice. The cool part is that replication is the most important piece of the scientific method, so anyone who thinks they got it wrong can demonstrate it!
Which is also why it draws so much ire. It speaks the truths developers don't like to admit.
They test with their community and get the biases of their community. Don't pretend it's more than that.
But it doesn't really matter which codebases they used, does it? Replication efforts will reveal anything they got wrong. No need to make guesses.
So you are meaning – with respect to the code – the same as all other software? What, then, is "enterprise" trying to add? It is not like you would write code that makes billions any differently than you would write code that makes a dollar.
Not at all. Fundamentally, you do need understanding in order to criticize. "Criticizing" without understanding is merely whining. If your intent is to whine, you are certainly welcome to, to your heart's content, but it will be fruitless. Without you having an understanding – and being able to articulate it – progress cannot be made. This should be obvious.
> A stack trace is ground truth
But a costly truth. Even languages that do pass around stack traces are careful to avoid them except under special circumstances, which is kind of nonsensical from a theoretical point of view: if you find them useful at all, you'd find them useful in all cases. However, it is a necessary tradeoff for the sake of computing efficiency.
With a few small changes to your codebase you can restore the automatic attachment of stack traces like the original experiments had. Stack traces are made available for you to use! But, it remains that the research showed that the typical application didn't ever use it, so it wasn't a sensible default to include given the cost of inclusion. "But, but I wish it were!" doesn't change reality like you seem to think it does.
Understanding comes from all kinds of places. When a child touches a hot stove, they come to understand the consequences. That child doesn't gather 30 participants and record their reactions as they take turns burning their fingers. I'll leave you to extrapolate.
How I come across has no bearing on what is said. This is irrelevant and a pointless distraction.
> Understanding comes from all kinds of places.
If you have an understanding then you've studied it. All that is lacking, then, is the communication of what is understood. Strangely, after several comments, still crickets on the only thing that would add value to the discussion...
And using those utilities to test whether an err is of a certain kind once it's been wrapped a few times.
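A rough TypeScript analogue of such a utility (hand-rolled, assuming ES2022 error causes): walk the cause chain to test whether some wrapped layer is of a given kind.

    function hasKind<T extends Error>(
      err: unknown,
      ctor: new (...args: any[]) => T,
    ): boolean {
      let e: unknown = err;
      while (e instanceof Error) {
        if (e instanceof ctor) return true; // found the kind somewhere in the chain
        e = e.cause;
      }
      return false;
    }

    class RateLimitError extends Error {}
    const wrapped = new Error("request failed", {
      cause: new Error("api call", { cause: new RateLimitError("429") }),
    });
    console.log(hasKind(wrapped, RateLimitError)); // true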
It might be nice for JS to have a more generic-sounding "do-notation"/"computation expression" syntax than async/await, but it is pretty powerful as-is, and it's kind of interesting seeing people talk about writing monadic JS error handling while ignoring the "built-in" monad that now exists.
This is also where I see it as a false dichotomy between Monads and try/catch. One is already a projection of the other in existing languages today (JS Promise, C# Task, Python Future/Task sometimes), and that's probably only going to get deeper and in more languages. (It's also why I think Go being intentionally "anti-Monadic" feels like such a throwback to bad older languages.)
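To make that concrete: async/await is already do-notation for the Promise monad, where each await is a bind and try/catch is the error branch. A small sketch (URL and response shape hypothetical):

    interface Repo { stars: number }

    async function totalStars(user: string): Promise<number> {
      const res = await fetch(`https://api.example.com/users/${user}/repos`);
      const repos: Repo[] = await res.json();
      return repos.reduce((sum, r) => sum + r.stars, 0);
    }

    // The same computation desugared into explicit binds:
    function totalStarsDesugared(user: string): Promise<number> {
      return fetch(`https://api.example.com/users/${user}/repos`)
        .then(res => res.json())
        .then((repos: Repo[]) => repos.reduce((sum, r) => sum + r.stars, 0));
    }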
However, many make the mistake of handling errors at the wrong level. This leads to really buggy and hard-to-reason-about code, and in some cases really bad data inconsistency issues.
A rule of thumb is to never catch a specific error which you are not in a good position to handle correctly at that precise level of the code. Just let it pass through.
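A minimal TypeScript sketch of that rule of thumb (CacheMissError and both helpers are hypothetical): catch only the one error this level can actually handle, and rethrow everything else untouched.

    class CacheMissError extends Error {}

    interface User { id: string; name: string }
    declare function readCache(id: string): Promise<User>;
    declare function fetchFromDb(id: string): Promise<User>;

    async function getUser(id: string): Promise<User> {
      try {
        return await readCache(id);
      } catch (err) {
        if (!(err instanceof CacheMissError)) throw err; // not ours to handle here
        return fetchFromDb(id); // the one failure we can recover from at this level
      }
    }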
The fromThrowable(a / b) example will immediately throw if b == 0, because a / b is evaluated immediately, so execution never makes it into fromThrowable(). Does it need to be () => a / b instead?

Similarly, withRetry()'s argument needs to have type () => ResultAsync<T, ApiError> -- at present, it is passed a Result, and if that Result is a RateLimit error, it will just return the same error again 1s later.
Compared to try / catch with await, falling back to promises at least makes the error handling explicit for each request — more along the lines of what Go does without having to introduce a new pattern.
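A sketch of that per-request style (fetchUser and fetchOrders are hypothetical): each promise converts its own rejection into a value, Go-style, instead of one try/catch over the whole block.

    interface User { id: string; name: string }
    declare function fetchUser(id: string): Promise<User>;
    declare function fetchOrders(userId: string): Promise<string[]>;

    async function loadDashboard(id: string) {
      const user = await fetchUser(id).then(
        u => ({ ok: true as const, value: u }),
        e => ({ ok: false as const, error: e as Error }),
      );
      if (!user.ok) return { error: user.error }; // this request's failure, handled here

      const orders = await fetchOrders(user.value.id).then(
        o => ({ ok: true as const, value: o }),
        e => ({ ok: false as const, error: e as Error }),
      );
      if (!orders.ok) return { error: orders.error };

      return { user: user.value, orders: orders.value };
    }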
In Go, use Go style; in JS, use try/catch/finally.
Some junior engineer is going to stumble upon this and create a mess and a bad week for the next poor soul who has to work with it.
Well, IIUC, Java had (and still has) something called “checked exceptions”, but people have, by and large, elected not to use that kind of exception, since it makes the rest of the code balloon with enormous lists of exceptions, each of which must be changed when some library at the bottom of the stack changes slightly.
Rust needs a bit more boilerplate to declare FooError, but the ? syntax automatically calling into(), and into() being free to rearrange errors as it bubbles them up, really help a lot too.
The big problem with Java's checked exceptions was that you need to list all the exceptions on every function, every time.
https://blogs.oracle.com/javamagazine/post/java-sealed-class...
In a surprising twist: Java has ConcurrentModificationException. And, to counter its own culture of exception misuse, the docs have a stern reminder that this exception is supposed to be thrown when there are bugs. You are not supposed to use it to, I dunno, iterate over the collection and bail out (control flow) based on getting this exception.
I hate checked exceptions too, but in fairness to them this specific problem can be handled by intermediate code throwing its own exceptions rather than allowing the lower-level ones to bubble up.
In Go (which uses error values instead) the pattern (if one doesn’t go all the way to defining a new error type) is typically to do something like:

    if err != nil {
        return fmt.Errorf("loading config: %w", err)
    }

which returns a new error that wraps the original one (and can be unwrapped to get it). A similar pattern could be used in languages with checked exceptions.
Checked exceptions should indicate conditions that are expected to be handled by the caller. If a method is throwing a laundry list of checked exceptions then something went wrong in the design of that method’s interface.
Exactly. If Stream methods like filter() and map() could automatically "lift" the checked exceptions thrown by their callback parameters into their own exception specifications, it would solve one of the language's biggest pain points (namely: Streams and checked exceptions, pick one).
That's mostly developer laziness: they write a layer that calls the exception-throwing code, but they don't want to think about how to model the problem in their own level of abstraction. "Leaking" exceptions upwards by slapping on a "throws" clause is one of the lowest-effort reactions.
What ought to happen is that each layer has its own exception classes, capturing its own model for what kinds of things can go wrong and what kinds of distinctions are necessary. These would abstract-away the lower-level ones, but carrying them along as linked "causes" so that diagnostic detail isn't lost when it comes time for bug-reports.
Ex: If I'm writing a tool to try to analyze and recommend music that has to handle multiple different file types, I might catch an MP3 library's Mp3TagCorruptException and wrap it into my own FileFormatException.
The problem with Java checked exceptions is that they don't work well with interfaces, refactoring, or layering.
For interfaces you end up with stupid stuff like ByteArrayInputStream#reset claiming to throw an IOException, which it obviously never will. And for refactoring and layering, it's typical that you want to handle errors either close to where they occurred or far from where they occurred, but checked exceptions force all the middle stack frames that don't have an opinion to be marked as well. It's verbose and false-positives a lot (in that you write a function, hit compile, then go "ah, forgot to add <blah> to the list that gets forwarded along..." and repeat).
It'd be better if it were the inverse, if anything: exceptions are assumed to chain until a function is explicitly marked as an exception boundary.
Syntactic sugar should make it easier to capture the decision after it's been made. For example, like replacing "throws InnerException" (perhaps a leaky abstraction) with something like "throws MyException around InnerException".