There are async-safe variants but the typical lodash-style implementations are not. If you want the semantics of "return a promise when the function is actually invoked and resolve it when the underlying async function resolves", you'll have to carefully vet if the implementation actually does that.
There are always countless edge cases that behave incorrectly - they might not matter and can often be ignored - but while the general idea of debouncing sounds easy, and adding it to an rxjs observable is indeed straightforward, actually getting the desired behavior out of rxjs gets complicated very fast if you're required to be correct/spec compliant.
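To make that concrete, a promise-aware trailing-edge debounce might look roughly like this (a minimal sketch; `debouncedAsync` and its behaviour are illustrative, not lodash's or rxjs's API, and it still ignores edge cases like overlapping in-flight calls and cancellation):

```typescript
// Sketch of a promise-aware trailing-edge debounce (illustrative only).
// Every caller gets a promise; only the last call in a quiet window actually
// runs the underlying async function, and all pending promises settle with its result.
function debouncedAsync<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  waitMs: number
): (...args: A) => Promise<R> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let pending: { resolve: (r: R) => void; reject: (e: unknown) => void }[] = [];

  return (...args: A) =>
    new Promise<R>((resolve, reject) => {
      pending.push({ resolve, reject });
      if (timer !== undefined) clearTimeout(timer);
      timer = setTimeout(() => {
        const settlers = pending;
        pending = [];
        timer = undefined;
        fn(...args).then(
          (r) => settlers.forEach((s) => s.resolve(r)),
          (e) => settlers.forEach((s) => s.reject(e))
        );
      }, waitMs);
    });
}
```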
For example, debouncing is often recommended for handlers of the resize event, but, in most cases, it is not needed for handlers of observations coming from ResizeObserver.
I think this is the case for other modern APIs as well. I know that, for example, you don’t need debouncing for the relatively new scrollend event (it does the debouncing on its own).
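For illustration, a rough sketch of leaning on those APIs directly instead of wrapping a debounce around them (assuming a browser that supports scrollend and ResizeObserver):

```typescript
// scrollend fires once, after scrolling has settled, so no manual debounce is needed
// (older browsers only expose the repeatedly-firing "scroll" event).
document.addEventListener("scrollend", () => {
  console.log("scrolling finished, safe to do expensive work once");
});

// ResizeObserver batches observations per frame, so its handlers usually don't
// need the debounce that a raw window "resize" listener often gets.
const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    console.log("resized:", entry.contentRect.width, entry.contentRect.height);
  }
});
observer.observe(document.body);
```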
Human interaction with circuits, sensors and receptors works like that.
When we press a keyboard key or flip a circuit switch, the contacts are very sensitive. We feel we pressed once, but during that single press our fingers and hands vibrate, so the event gets registered multiple times. After that first event, a subsequent event can only be considered legitimate if the idle period between the two matches the desired debounce delay.
In web and software programming, or in network request handling, the term "debounce" is used in the sense of pushing away someone or something aggressive.
For example, picture a gate and a queue.
Throttling -> the gate opens every 5 minutes and lets one person in, no matter what.
Debounce -> if the people in the queue are deliberately being aggressive, thrashing at the door to make it open, we push them away. Now, instead of 5 minutes, we tell them they have to wait another 5 minutes since they are harassing the gate; if they try again before then, we reset it to another 5 minutes. Debounce is thus there to prevent aggressive behaviour.
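In code, the gate analogy maps roughly to something like this (a minimal sketch; real implementations add leading/trailing options, cancellation, max-wait, etc.):

```typescript
// Throttle: the "gate" opens at most once per interval, no matter how often it's asked.
function throttle<A extends unknown[]>(fn: (...args: A) => void, intervalMs: number) {
  let last = 0;
  return (...args: A) => {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn(...args);
    }
  };
}

// Debounce: every new attempt pushes the wait out again; fn only runs once the
// caller has been quiet for the full interval.
function debounce<A extends unknown[]>(fn: (...args: A) => void, intervalMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), intervalMs);
  };
}
```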
In terms of, say, client-server requests over a network:
We can throttle the requests processed by the server - say the server will only process one request every 5 minutes, like an API rate limit; within those 5 minutes, no matter how many requests are made, they are ignored.
But if the client is aggressive - they keep clicking the submit button, making hundreds of requests - then even with throttling the server suffers a kind of DDoS.
So on the client side we add a debounce to the button's click event: even if an impatient user keeps clicking, no unnecessary network requests are made to the server until the user stops, as sketched below.
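Wired up to the submit button with the debounce helper sketched above, that might look like this (sendRequest and the #submit selector are placeholders):

```typescript
// Only the last click in a quiet 500ms window actually hits the server.
// sendRequest is a hypothetical stand-in for the real network call.
declare function sendRequest(): Promise<void>;

const submitButton = document.querySelector<HTMLButtonElement>("#submit")!;
submitButton.addEventListener(
  "click",
  debounce(() => {
    void sendRequest();
  }, 500)
);
```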
That said, this is a good resource on the original meaning: https://www.ganssle.com/debouncing.htm
kazinator•6h ago
The analogy here is poor; reducing thrashing in those obnoxious search completion interfaces isn't like debouncing.
Sure, if we ignore everything about it that is not like debouncing, and we still have something left after that, then whatever is left is like debouncing.
One important difference is that if you have negligible latency and unlimited processing power, you can do a full search for each keystroke, filter it down to half a dozen results and display the completions. In other words, the more power you have, the less important it is to do any "debouncing".
Switch debouncing is not like this. The faster your processor is at sampling the switch, the more bounces it sees, and consequently the more crap it has to clean up. Debouncing certainly does not go away with a faster microcontroller.
maxbond•5h ago
I think it makes sense if you view it from a control theory perspective rather than an embedded perspective. The mechanics of the UI (be that a physical button or a text input) create a flapping signal. Naively updating the UI on that signal would create jank. So we apply some hysteresis to obtain a clean signal. In the same way that acting 50 times on a single button press is incorrect behavior, saving (or searching or what have you) 50 times while typing a single sentence isn't correct (or at least not desired).
The example of 10ms is way too low, though; anything less than 250ms seems needlessly aggressive to me. 250ms is still going to feel very snappy. I think if you're typing at 40-50wpm you'll probably have an interval of 100-150ms between characters, so 10ms is hardly debouncing anything.
account42•2h ago
WTF no it won't.
jiehong•5h ago
Doesn't really apply to a search box, where it's more of a delayed event that fires only if no new event arrives during a specific time window, keeping only the last event.
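In rxjs terms, that "keep only the last event after a quiet window" behaviour is roughly debounceTime (a sketch using rxjs 7 imports; the 250ms value, the #search selector and the search function are illustrative):

```typescript
import { fromEvent, debounceTime, map, distinctUntilChanged, switchMap } from "rxjs";

// Hypothetical search function and input element, for illustration only.
declare function search(query: string): Promise<string[]>;
const input = document.querySelector<HTMLInputElement>("#search")!;

fromEvent(input, "input")
  .pipe(
    map(() => input.value.trim()),
    debounceTime(250),          // keep only the last value after 250ms of quiet
    distinctUntilChanged(),     // skip if the settled value didn't actually change
    switchMap((q) => search(q)) // drop any in-flight search when a new one starts
  )
  .subscribe((results) => console.log(results));
```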
Tade0•5h ago
But you don't want that, as it's useless. Until the user has actually finished typing, they're going to get more results than they can meaningfully use - especially since the majority will be irrelevant and just get in the way of the real results.
The signal in between really isn't useful - at least not on the first try, when the user doesn't yet know what's in the data source or how to shape the search query to get their results with minimal input.
meindnoch•3h ago
Don't make assumptions about what the user may or may not want to search for.
E.g. in my music collection I have albums from both !!! [1] and Ø [2]. I've encountered software that "helpfully" prevented me from searching for these artists, because the developers thought that surely no one would search for such terms.
_______
[1] https://www.discogs.com/artist/207714-!!! ← See? The HN link highlighter also thinks that URLs cannot end with !!!.
[2] https://www.discogs.com/artist/31887-Ø
prmph•1h ago
Even the 10ms in TFA is too low. I personally wouldn't mind (or probably even notice) a delay of 100 ms.
account42•16m ago
Whatever delay you add before showing results doesn't get hidden by the display and the user's reading latency; it adds to it.
Findecanor•5h ago
I've programmed my own keyboards, mice and game controllers. If you want the fastest response time then you'd make debouncing be asymmetric: report press ("Make") on the first leading edge, and don't report release ("Break") until the signal has been stable for n ms after a trailing edge. That is the opposite of what's done in the blog article.
Having a delay on the leading edge is for electrically noisy environments, such as among electric motors and a long wire from the switch to the MCU, where you could potentially get spurious signals that are not from a key press. Debouncing could also be done in hardware without delay, if you have a three-pole switch and an electronic latch.
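A rough sketch of that asymmetric scheme as a little state machine fed from a fixed-rate sampling loop (illustrative only; the class and parameter names are made up):

```typescript
// Asymmetric debounce: report "make" immediately on the first leading edge,
// but only report "break" once the raw signal has been stable low for stableMs.
// Sketch only; real firmware would call sample() from its scan loop.
class AsymmetricDebouncer {
  private pressed = false;
  private lastHighMs = 0;

  constructor(private stableMs: number) {}

  // raw: current switch reading, nowMs: current time. Returns "make", "break", or null.
  sample(raw: boolean, nowMs: number): "make" | "break" | null {
    if (raw) {
      this.lastHighMs = nowMs;
      if (!this.pressed) {
        this.pressed = true;
        return "make"; // no delay on press
      }
    } else if (this.pressed && nowMs - this.lastHighMs >= this.stableMs) {
      this.pressed = false;
      return "break"; // release only after the signal has stayed quiet
    }
    return null;
  }
}
```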
A better analogy would perhaps be "Event Compression": coalescing multiple consecutive events into one, used when producer and consumer are asynchronous. Better but not perfect.
account42•2h ago
If it really only makes sense to perform the action once, then disable/remove the button on the first click (sketched below). If it makes sense to click the button multiple times, then there should be no limit to how fast you can do that. It's really infuriating when crappy software drops user input because it's too slow to process one input before the next. There is a reason why input these days comes in events that are queued, and we aren't still checking whether the key is up or down in a loop.
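E.g. something along these lines (a sketch; submitForm and the #submit selector are placeholders):

```typescript
// Disable the button for the duration of the action instead of debouncing the click.
// submitForm is a hypothetical stand-in for the actual work.
declare function submitForm(): Promise<void>;

const button = document.querySelector<HTMLButtonElement>("#submit")!;
button.addEventListener("click", async () => {
  button.disabled = true;
  try {
    await submitForm();
  } finally {
    button.disabled = false; // or remove/replace the button if it's truly one-shot
  }
});
```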
matthewmacleod•3h ago
Yes, it's not an exact comparison (hence analogy) – but it's not anything worth getting into a fight about.
soulofmischief•1h ago
The reality is that language evolves all the time through specialized use and adoption, web development is no different. Every profession and craft builds a pattern language from both borrowed and new terms.
You can explain the phenomenon without patronizing and insulting anyone who works on frontend code.
davnicwil•1h ago
Come to think of it, throttle is the much easier analogy to understand.