In sync code, the caller owns the stack, so it makes sense that they own the error. But async splits that: each async function runs like a background job. That job should handle its own failure (retry, fallback, log) because the caller usually can't do much anyway.
Write async blocks like isolated tasks. Contain errors inside unless the caller has a real decision to make; a global error handler picks up the rest.
the caller is itself a task / actor
The thing is that the caller might want to roll back what they're doing based on whether the subtask was rolled back, and so on, backtracking as far as needed.
Ideally, all the side effects should be queued up and executed only at the end, after your caller has successfully heard back from all the subtasks.
For example: don't commit DB transactions, send out emails, or post transactions onto a blockchain until you know everything went through. Exceptions mean rollback, a lot of the time.
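A minimal sketch of this deferral pattern in Python (the `UnitOfWork` name and its API are illustrative, not from any particular framework):

```python
class UnitOfWork:
    """Queue side effects; run them only after all required subtasks succeed."""

    def __init__(self):
        self._side_effects = []

    def defer(self, action):
        # Side effects (send email, post to a blockchain, ...) are queued,
        # not executed, so a later failure can still "roll back" cleanly.
        self._side_effects.append(action)

    def run(self, *subtasks):
        for task in subtasks:
            task()            # any exception aborts before side effects fire
        for effect in self._side_effects:
            effect()          # only reached once every subtask succeeded


log = []
uow = UnitOfWork()
uow.defer(lambda: log.append("email sent"))
uow.run(lambda: log.append("row written"))
print(log)  # the email fires only after the DB work succeeded
```

If any subtask raises, `run` aborts before the loop over `_side_effects`, so nothing irreversible has happened yet.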
On the other hand, "after" hooks are supposed to happen after a task completes fully, and their failure shouldn't make the task roll anything back. For really frequent events you might want to debounce, as browsers effectively do for scrolling: in modern browsers, touch and wheel listeners are passive by default, so they can't call preventDefault() unless you register them with {passive: false}.
PS: To keep things simple, consider using single-threaded applications. I especially like PHP, because it's not only single-threaded but actually shared-nothing: as soon as your request handling ends, the memory is released. Unlike Node.js, you don't worry about leaking memory or secrets between requests.
But whether you use PHP or Node.js, you are essentially running on a single thread, and that means you can write code that basically does tasks sequentially, one after the other. If you need to fan out and do a few things at a time, you can do it with Node.js's Promise.all(), while with PHP you queue up a bunch of closures and then explicitly batch-execute them with e.g. the curl_multi_ methods. Either way, you'll need to explicitly write your commit logic at the end, e.g. in PHP's shutdown handler, and your database can help you isolate your transactions with COMMIT or ROLLBACK.
If you organize your entire code base around dispatching events instead of calling functions, as I did, then you can easily refactor it to do things like microservices at scale, using signed HTTPS requests as a transport (so you can isolate secrets, credentials, etc. from the web server): https://github.com/Qbix/Platform/commit/a4885f1b94cab5d83aeb...
Any ASYNC operation, whether using coroutines, event-based actors, or anything else, should be modelled as a network call.
You need a handle that will contain information about the async call and will own the work it performs. You can have an API that explicitly says “I don’t care what happens to this thing just that it happens” and will crash on failure. Or you can handle its errors if there are any and importantly decide how to handle those errors.
Oh and failing to allocate/create that handle should be a breach of invariants and immediately crash.
That way you have all the control and flexibility, and async error handling becomes trivial; you can use whatever async pattern you want to manage async operations at that point as well.
And you also know you have fundamentally done something expensive in latency for the benefit of performance or access to information, because if it was cheap you would have just done it on the thread you are already using.
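In Python's asyncio, for example, the `Task` object can play the role of that handle; this sketch shows the caller explicitly owning the outcome (the `flaky_fetch` name is illustrative):

```python
import asyncio

async def flaky_fetch():
    # Stand-in for an expensive, failure-prone async operation.
    raise RuntimeError("network down")

async def main():
    # The Task is the handle: it owns the work and records its outcome.
    handle = asyncio.ensure_future(flaky_fetch())
    try:
        return await handle            # observing the handle surfaces its error
    except RuntimeError as e:
        return f"fallback after: {e}"  # the caller decides what failure means

result = asyncio.run(main())
print(result)
```

The fire-and-forget variant would be a done-callback on the handle that escalates any exception, rather than silently dropping it.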
But what if you need to send emails AND record it in a DB?
> It does.
Well, what are they?
> You queue ALL of these side effects (simply tasks whose exceptions don't rollback your own task) until the end.
Yes, but what are the advantages of doing this?
> Then you can perform them all, in parallel if you wish.
I can do that without queuing them up first.
If any of the required subtasks fail, you don’t do the side effects. You ROLLBACK.
I'm afraid I am still not seeing the advantage here: the subtasks that can fail are almost exclusively IO. If the email has already been sent when the DB update fails, that email can't be recalled.
Other than hash/signature verification, just what sort of subtasks did you have in mind that can fail and aren't IO?
Async subtasks typically ARE I/O, whether over a network or not.
The email shouldn't be sent if the DB update fails. The whole point is you wait until everything succeeds, before sending the email.
If your subtasks cause you to write some rows as if they succeeded, but subsequent subtasks fail, that is bad. You have to rollback your changes and not commit them.
If you charge a user even though they didn't get the thing they paid for, that's bad. Yes you can refund them after a dispute, but it's better not to have charged them in the first place.
The point is this: any subtasks that can cause your main task to fail should be processed BEFORE any subtasks that cannot cause your main task to fail.
A common sequence is "send email, then update DB with the new count of emails sent". It doesn't matter which way you reorder them; there is no advantage to queuing those two tasks to run at the end, because if the first succeeds and the second fails you have still done half of an atomic task.
> The point is this: any subtasks that can cause your main task to fail should be processed BEFORE any subtasks that cannot cause your main task to fail.
Do you have any examples that aren't IO? Because IO can always fail, and I am still wondering what sort of workflow (or sequence of tasks) you have in mind where you will see any advantage to queuing all the IO to run at the end.
If you have pure computation subtasks (such as checking a hash or signature), then sure, do the IO only after you have verified the check. Have you any idea how rare that workflow is other than for checksumming?
What workflow have you in mind, where we see a practical advantage from queuing IO to run at the end?
They were introduced in the Trio library [2] for Python, but they're now also supported by Python's built-in asyncio module [3]. I believe the idea has spread to other languages too.
[1] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
[2] https://trio.readthedocs.io/en/stable/
[3] https://docs.python.org/3/library/asyncio-task.html#task-gro...
That would make it Beam's. Credit where due...
rorylaitila•7mo ago
I always have a global error handler that logs and alerts on anything uncaught. This allows me to code the happy path. Most of the time, it's not worth figuring out how to continue processing under every possible error, so to fail and bail is my default approach. If I later determine that it's something that can be handled to continue processing, then I update that code path to handle that case.
Most of my code is web applications, so that is where I'm coming from.
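A minimal sketch of that approach in Python (the decorator, `create_order`, and the response shape are all illustrative, not from a real framework):

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def global_handler(handler):
    """Wrap a request handler: log + alert on anything uncaught."""
    def wrapped(request):
        try:
            return handler(request)      # handler code is happy path only
        except Exception:
            log.exception("uncaught error handling %r", request)
            return {"status": 500, "body": "internal error"}  # fail and bail
    return wrapped

@global_handler
def create_order(request):
    return {"status": 200, "body": f"order for {request['user']}"}

print(create_order({"user": "alice"}))   # happy path
print(create_order({}))                  # KeyError: logged, returns 500
```

Later, if a specific error turns out to be recoverable, you catch it inside `create_order` itself and leave the global handler for everything else.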
PaulHoule•7mo ago
In the async case you can pass the Exception as an object as opposed to throwing it but you're still left with the issue that the failure of one "task" in an asynchronous program can cause the failure of a supertask which is comprised of other tasks and handling that involves some thinking. That's chess whereas the stuff talked about in that article is Tic-Tac-Toe in comparison.
rorylaitila•7mo ago
But I can get away with this also because I don't write async heavy code. My web applications are thread-per-request (Java). This fits 99% of the needs of business code, whose processing nature is mostly synchronous.
PaulHoule•7mo ago
People used to worry about the 10k connection problem but machines are bigger now, few services are really that big, and fronting with nginx or something like that helps a lot. (That image sorter serves images with IIS)
JavaScript is async and you gotta live with it because of deployability. No fight with the App Store. No installshield engineer. No army of IT people to deploy updates. “Just works” on PC, Mac, Linux, tablet, game consoles, VR headsets, etc. Kinda sad people are making waitlist forms with frameworks that couldn’t handle the kind of knowledge graph editor and decision support applications I was writing in 2006 but that’s life.
bob1029•7mo ago
If you reach into the enterprise bucket of tricks, technologies like WCF/SOAP can propagate these across systems reliably. You can even forward the remote stack traces by turning on some scary flags in your app.config. Printing the final exception with .ToString() then creates a really magical narrative of what the fuck happened.
The entire reason exceptions are good is because of stack traces. It is amazing to me how many developers do not understand that having a stack trace at the exact instant of a bad thing is like having undetectable wall hacks in a competitive CS:GO match.
flysand7•7mo ago
This has been my biggest problem with exceptions: one, for the reason outlined above, plus how much time you actually end up spending figuring out what the exception for a certain situation is. "Oh, you're making a database insertion; what's the error that's thrown if you get a constraint violation? I might want to handle that." And then it's all an adventure, because there's no way to know in advance. If the docs are good, it's in the docs; otherwise "just try it" seems to be the way to do it.
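For one concrete case where the answer happens to be documented: Python's sqlite3 module raises sqlite3.IntegrityError on constraint violations. With many libraries, though, you really do only find out by trying it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")
conn.execute("INSERT INTO users VALUES ('a@example.com')")

try:
    # duplicate primary key: which exception is this? the adventure begins
    conn.execute("INSERT INTO users VALUES ('a@example.com')")
except sqlite3.IntegrityError as e:
    caught = e   # UNIQUE/PRIMARY KEY violations surface as IntegrityError

print(type(caught).__name__, "-", caught)
```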
vips7L•7mo ago
9rx•7mo ago
Who doesn't understand that? If you aren't using exceptions you are using wrapping instead, and said wrapping is merely an alternative representation of what is ultimately the very same thing. This idea isn't lost on anyone, even if they don't use the call stack explicitly.
The benefit of wrapping over exceptions[1] is that each layer of the stack gains additional metadata to provide context around the whole execution. The tradeoff is that you need code at each layer in the stack to assign the metadata instead of being able to prepare the data structure all in one place at the point of instantiation.
[1] Technically you could wrap exceptions in exceptions, of course, so this binary statement isn't quite right; but since exceptions have proven to be useless if you find yourself ending up here, with two stacks offering the same information, we will assume for the sake of discussion that the division is binary.
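In Python terms, this kind of wrapping can be sketched with exception chaining, where each layer attaches its own context and keeps the original error on `__cause__` (all names here are illustrative):

```python
class ConfigError(Exception):
    pass

def read_raw(path):
    # Stand-in for real I/O failing at the bottom of the stack.
    raise FileNotFoundError(path)

def load_config(path):
    try:
        return read_raw(path)
    except FileNotFoundError as exc:
        # Wrap: this layer adds its own metadata; the original error stays
        # reachable on __cause__, one link in the chain per layer.
        raise ConfigError(f"while loading config {path!r}") from exc

try:
    load_config("/etc/app.conf")
except ConfigError as err:
    wrapped = err

print(wrapped, "| caused by:", repr(wrapped.__cause__))
```

The tradeoff is exactly as described: each layer needs its own try/except to contribute context, instead of getting the whole trace for free at the throw site.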
groestl•7mo ago
9rx•7mo ago
Checked exceptions were introduced to try to help with that problem, giving you at least a compiler error if an implementation changed from underneath you. But that comes with its own set of problems and at this point most consider it to be a bad idea.
Of course, many just throw caution to the wind and don't consider the future, believing they'll have moved on by then and it will be the next programmer's problem. Given the context of discussion, we have assumed that is the case.
HdS84•7mo ago
PaulKeeble•7mo ago
Having used Go for years now, frankly I prefer exceptions. Way too often there is nothing that can be done about an error locally, but it still produces noise and if-branches all over the code base, and it's even worse to add an error to a method later than in Java, because every method has to have code added, not just a signature change. I really miss stack traces, and the current state of the art in Go has us writing code to produce them in every method.
bigstrat2003•7mo ago
vips7L•7mo ago
They're really painful with lambdas, and you need to do weird things to get them to work properly, like rethrowing and catching some unchecked type. Scala has some interesting research here and describes the problem well [0].
Some other things I think would go a long way to making checked exceptions more usable: making try an expression, like in Scala or Kotlin. Not being an expression makes for some really awkward code, or giant try blocks where you can't tell what actually errors.
Finally, we really need a way to "uncheck" them unceremoniously. This is one of the largest reasons developers have rejected them. If you can't possibly handle something, you need to write at least 5-6 lines of code to wrap and throw a runtime exception, or you see developers checking things that they can't handle, and then their callers, who also can't handle those exceptions, have to deal with the ceremony of unchecking. I'd really love some `throws unchecked` or `try!` syntax that would just automatically turn something into a runtime exception.
This all of course is probably a pipe dream; the OpenJDK team seems to be indefinitely stuck pouring all resources into Valhalla.
[0] https://docs.scala-lang.org/scala3/reference/experimental/ca...
delifue•7mo ago
But often a language problem can be partially solved by the IDE. IDEs can already generate `if err != nil` branches, and GoLand can fold them: https://github.com/golang/vscode-go/issues/2311
9rx•7mo ago
What's to miss? Go has exception handlers and stack traces built in, and has had since day one. Even the standard library uses them (e.g. encoding/json), if it is that you were waiting on some kind of "blessed permission" to proceed. Exception handling isn't appropriate for every situation (no tool is appropriate for every situation), but if it is for the kinds of problems you have, use it. The tools are there to use.
retrodaredevil•7mo ago
Proper exception handling in Java can feel verbose. In general you should not be adding checked exceptions to method signatures to make the compiler happy. You should be catching them and rethrowing if you cannot handle them.
o11c•7mo ago
But if you make them take a `context` object, there's no longer a problem.
One interesting observation - you can use them even for the initial "failed to allocate a context" by interpreting a NULL pointer as always containing an "out of memory" error.
vips7L•7mo ago
I really think that if a language invested in them properly, more developers would come to see their value. I truly hope that will be Java some day. Making checked exceptions work across lambdas and providing language constructs to uncheck them would go miles.