I remember trying to use Immutable.js back in the day. It's probably pretty great with TypeScript these days, but it was kinda hell with vanilla JS back then: I'd accidentally assign `thing.foo = 42` instead of calling `thing.set('foo', 42)`, then comb through the code to figure out why nothing was updating. I also remember never being sure whether I had a plain JS object or a Record in hand. All things fixed by TypeScript, of course.
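Something like this minimal sketch of the failure mode (using Immutable.js's Map; a Record reads almost identically, which was half the problem):

```js
const { Map } = require('immutable');

const thing = Map({ foo: 0 });

// The silent failure: this just tacks a plain JS property onto the object.
// The immutable data inside is untouched, and no error is raised.
thing.foo = 42;
console.log(thing.get('foo')); // still 0

// The correct call returns a *new* map; the original is never mutated.
const updated = thing.set('foo', 42);
console.log(updated.get('foo')); // 42
console.log(thing.get('foo'));   // still 0
```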
I used Ramda for all the data transformations as data flowed through the app. I ran the same code in React Native too, and could count the precise number of operations needed for a megabyte or two of data updating every few seconds and feeding data visualizations. Nothing was hidden!
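A hedged sketch of what those pipelines look like (the data shape here is my invention, not the original app's):

```js
const R = require('ramda');

// Every transformation is an explicit step in the pipeline, so the
// operation count for each update is easy to reason about.
const prepareChartData = R.pipe(
  R.filter((reading) => reading.value != null), // drop incomplete readings
  R.groupBy(R.prop('sensorId')),                // one series per sensor
  R.map(R.map(R.prop('value'))),                // keep just the numbers
  R.map(R.mean)                                 // one average per series
);

const readings = [
  { sensorId: 'a', value: 1 },
  { sensorId: 'a', value: 3 },
  { sensorId: 'b', value: null },
  { sensorId: 'b', value: 10 },
];

console.log(prepareChartData(readings)); // { a: 2, b: 10 }
```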
A decade ago I attached myself to the Mongo, Express, Angular, Node stack, and it was a disastrous waste of time; nobody was hiring for that skill set. At the beginning of 2018 I started with React, Redux, Immutable, and Reselect, and it was a massive success. I got very lucky: when I started that project, that was the hot new JavaScript stack all the bloggers were raving about. Even a broken clock is right twice a day.
But at the very least, if you're going to memoize immutable values, please do it in a way that allows garbage collection. JavaScript has WeakRef and FinalizationRegistry. (Why it doesn't provide the obvious WeakCache built on those is a mystery, though.)
The issues won't be visible on a toy example like making mazes a few hundred elements across, but if you use these techniques on real problems, you absolutely need to cooperate with the garbage collector.
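A minimal sketch of the missing WeakCache, built on exactly those two primitives (not production-ready: values must be objects, and finalization timing is up to the engine):

```js
class WeakCache {
  #map = new Map(); // key -> WeakRef(value)
  #registry = new FinalizationRegistry((key) => {
    // Drop the entry only if it still points at the collected value.
    const ref = this.#map.get(key);
    if (ref !== undefined && ref.deref() === undefined) this.#map.delete(key);
  });

  get(key) {
    return this.#map.get(key)?.deref();
  }

  set(key, value) {
    // WeakRef can only hold objects, so primitive values won't work here.
    this.#map.set(key, new WeakRef(value));
    this.#registry.register(value, key);
  }
}

// Memoize without pinning results forever: if nothing else holds the
// computed value, the GC is free to reclaim it.
const cache = new WeakCache();
function memoized(key, compute) {
  let value = cache.get(key);
  if (value === undefined) {
    value = compute(key);
    cache.set(key, value);
  }
  return value;
}
```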
Personally I'd design it with a `Point.Equals(p1, p2)` static method and forgo referential equality; then an LRU cache could prevent runaway memory usage. But tbh this is all bikeshedding for this use case anyway :). The original code is fine.
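A rough sketch of that alternative design (the `Point` shape, the string key, and the cache capacity are all my guesses at the use case):

```js
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  static Equals(p1, p2) { return p1.x === p2.x && p1.y === p2.y; }
}

// Value equality instead of reference identity:
console.log(Point.Equals(new Point(1, 2), new Point(1, 2))); // true

// Tiny LRU built on Map's insertion ordering; the capacity is arbitrary.
class LRUCache {
  constructor(capacity) { this.capacity = capacity; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value); // evict the oldest
    }
  }
}

// With value equality there's no need to intern Point instances just to
// make referential equality work, and memory usage stays bounded.
const memo = new LRUCache(10_000);
function neighborsOf(p) {
  const key = `${p.x},${p.y}`;
  let result = memo.get(key);
  if (result === undefined) {
    result = [new Point(p.x + 1, p.y), new Point(p.x - 1, p.y),
              new Point(p.x, p.y + 1), new Point(p.x, p.y - 1)];
    memo.set(key, result);
  }
  return result;
}
```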
On the one hand, allowing any function to be eligible for tail-call optimization (TCO) means that the developer won't know whether the code was optimized or not. A trivial change can easily convert a fast function into one that blows the stack. Additionally, every function created (or at least every named function) would need to be analyzed, causing a performance hit for everyone, even those who never write a recursive function in their lives.
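A minimal illustration of that fragility:

```js
'use strict'; // ES2015 proper tail calls apply only in strict-mode code

// With TCO, this runs in constant stack space: the recursive call is the
// last thing the function does, so its frame can be reused.
function sum(n, acc = 0) {
  if (n === 0) return acc;
  return sum(n - 1, acc + n);
}

// A trivial-looking edit silently loses that guarantee: the addition runs
// *after* the recursive call returns, so every frame stays alive and a
// large n blows the stack again.
function sumBroken(n) {
  if (n === 0) return 0;
  return n + sumBroken(n - 1);
}
```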
On the other hand, some argued that TCO eligibility should be explicitly opt-in via some sort of keyword or annotation. If a function so annotated did not end up being a valid target for TCO, the script would fail to compile, similar to a syntax error. That's an even harsher failure mode than the implicit version, but it would have been much easier to identify.
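If I remember right, the abandoned syntactic tail calls strawman (tc39/proposal-ptc-syntax) explored roughly this shape; treat the exact keyword as my possibly faulty recollection:

```js
function sum(n, acc = 0) {
  if (n === 0) return acc;
  // Early error at compile time if this is not a valid tail call.
  return continue sum(n - 1, acc + n);
}
```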
I vaguely recall the Chrome devs generally being in favor of explicit annotations, and the Safari devs of the implicit version. I could be completely wrong on this, and I don't think anyone was particularly enthused about the trade-offs either way.
A Skeptic’s Guide To Functional Programming With JavaScript
https://jrsinclair.com/skeptics-guide
I’d recommend it for functional-curious JS devs like myself.