But I'm in the same boat. I love the expressiveness of chaining a whole thing into a single call, but I have to break it apart for my own sanity.
It's the same reason I don't like this style of function:
.map(word => word.toUpperCase())
Sure, it's great today, but if I want to debug it I need to add `{}`, and if I need to add a second operation I need to add the curly braces as well. That's why I prefer the explicit form: .map((word) => {
return word.toUpperCase();
})
Since it's much easier to drop in a debug line or similar without rewriting the surrounding code. It also makes the git diff nicer in the future when you decide to do another operation within the `.map()` call.

I've asked many people to rewrite perfectly functioning code for this same reason. "Yes, I know you can do it all in 1 line, but let's create variables for each step so the code is self-documenting".
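A minimal sketch of the two arrow styles being compared (the sample data is made up for illustration): the concise body has to be rewritten into a block body before a debug line or second statement will fit.

```javascript
const words = ["alpha", "beta"];

// Concise body: fine today, but there is nowhere to put a debug line.
const upper = words.map(word => word.toUpperCase());

// Block body: a log line or extra statement slots in without
// restructuring the expression around it.
const upperDebuggable = words.map((word) => {
  console.log("mapping:", word); // easy to add or remove
  return word.toUpperCase();
});

console.log(upper);           // ["ALPHA", "BETA"]
console.log(upperDebuggable); // ["ALPHA", "BETA"]
```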
When I write, I now use LLMs as an alternative to Grammarly and explicitly instruct them not to rewrite. Sometimes the choice of words is very intentional, and LLMs don't "feel"/understand the emotional reason behind it.
As a heuristic for attention, "does the claimed author care about, understand, or even know what they're writing about?" is not a bad start.
I was surprised to see such obviously low quality slop here on HN.
This is a problem that people solved 25 years ago with detailedReturnObjectNames.
I'd expect that much of the time, the return variable is closely linked to the name of the function. So closely linked that it risks being redundant.
I probably won't adopt this as a convention, just because most of the time when I have access to a debugger I'll also have access to the source code and handle it myself. But I'm going to keep it in mind, and perhaps get fed up at some point and decide to adopt the idea. Thanks.
do you mean like streaming APIs where intermediate data structures aren't created? Is that what javascript does w/map() etc?
Yeah; this is what the new iterator methods were intended to solve
// This type of usage creates intermediates
array.map().filter()
// But these would not?
array.values().map().filter().toArray()
array.values().reduce()

Mostly though, it's a combination of a thousand little things. There's no perfect bullet-point list for 'this is AI', and if there were, AI would be able to hide it.
What humans are good at is seeing the uncanny valley in both images and prose. It's an old test [0] and we haven't managed to formalise it (or, as above, would even want to), but it's reliable for people with sufficient literacy.
I'm sure there's a case to be made for either, this post just doesn't do it.
- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
- Array.prototype.values https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
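The intermediate-free pipeline those iterator methods enable can be sketched as follows. Iterator helpers (`values().map().filter().toArray()`) only ship in newer engines, so this sketch uses plain generator functions, which behave the same way (elements flow through one at a time, no intermediate arrays) and run anywhere; the data and callbacks are made up for illustration.

```javascript
// Hand-rolled lazy equivalents of the iterator helpers.
function* lazyMap(iter, fn) {
  for (const x of iter) yield fn(x);
}
function* lazyFilter(iter, pred) {
  for (const x of iter) if (pred(x)) yield x;
}

const array = [1, 2, 3, 4, 5];

// Eager: .map() materializes a full intermediate array before .filter() runs.
const eager = array.map(x => x * 2).filter(x => x > 4);

// Lazy: each element passes through map and filter individually;
// only the final spread builds an array.
const lazy = [...lazyFilter(lazyMap(array.values(), x => x * 2), x => x > 4)];

console.log(eager); // [6, 8, 10]
console.log(lazy);  // [6, 8, 10]
```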
This addresses a few criticisms here, and the main criticisms I had.
It feels pretty clear that the chains in that example (filter/map) are meant for operating on collections. And that if you're searching for a single item then chaining isn't the way to go?
Personally, if I knew I wanted only a single item, I wouldn't feel more "nudged" towards appending a [0] to the end of a long chain than towards refactoring to find().
As to:
data
.transform()
.normalize()
.validate()
.save();
here the problem isn't that you've done method chaining; it's that you've named your functions with terse names whose meaning you're going to forget later on, e.g. a generic "normalise" vs. a "toLowerCase()" or whatever.

The apples-to-apples unchained equivalent isn't really any better:
const transformedData = transform(data);
const normalisedData = normalise(transformedData);
const validatedData = validate(normalisedData);
save(validatedData);
is not more readable or understandable.

Nothing earth-shattering, but for anyone who has ever had to debug something someone saw in production once and never again, any additional useful info is worth its weight in gold.
I actually think the first code block is easier to read. It's a familiar (to me) and simple pattern that is quick to read. I don't get how it would require more "decoding" than the second example which is more disjointed and needs more "parsing" for such a trivial case. Maybe it's about what you're used to?
I agree there are downsides to chaining. With more complex operations it can complicate debugging, and readability can suffer, so chaining is not a good fit there.
I thought this would have some decent insight as to memory usage or something.
Nope. It's just clinically stupid.
His first example is pretty much the same either way. I would say the "better" way is a little more involved to read. But it's nothing either way.
His second example makes unnecessary chains. He filters, then maps, when he could just use find and get the name like he does in the "steps" version.
Maybe we need fewer things.
For me, the problems with chaining from the point of mostly maintaining existing software are:
1. Harder to impossible to reason about.
As the author alludes to, 1-2 chains are fine, but it starts getting impossible when you get into a territory where you have a longer chain which has a deeper call tree. This happens over time where you start with a smaller chain and people start lengthening it, adding helper functions which grow into large call trees, etc. This makes it so that you have sort of a blackbox pipeline that is, at the very least, annoying and time-consuming to inspect.
2. Harder to debug
Author tries to mention this but seems to stop short of pointing out what is wrong with the example he provides. I work with Kotlin, and in Kotlin you cannot put a breakpoint in the middle of a chain! As far as I know, you can only put a breakpoint inside the chained function calls and step into/over them, but you cannot put a breakpoint between chained calls. This means the debugger is basically useless if your codebase looks as described in my previous point. The solution is to write a bit more code up front, naming each variable. This makes it much easier to debug the code/logic (because you can put a breakpoint on the specific variable/step you are interested in) and, more importantly, to understand it, because you explain the steps with the variable names and optionally also with comments.
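The same breakpoint-friendly style applies outside Kotlin too. A minimal JavaScript sketch (data and field names are made up): each named step is its own statement, so a breakpoint can land on any line and inspect the value produced so far.

```javascript
const users = [
  { name: "Ada", active: true },
  { name: "Bob", active: false },
];

// Each intermediate has a name and its own line — a breakpoint on
// either statement shows exactly what that step produced.
const activeUsers = users.filter(u => u.active);
const activeNames = activeUsers.map(u => u.name);

console.log(activeNames); // ["Ada"]
```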
3. Related problem - return chaining
Another issue I have in codebases I inherited is what I would describe as return chaining. It is what happens when you have code which returns a function call and the called function does the same thing and so on and so on. Minimalistic example:
function foo() {
  return x
    .map()
}
function baz() {
  return foo()
    .map()
}
function fbaz() {
  return baz()
    .map()
}
This way, there is usually no good place to inspect the values, and it is hard to reason about what the return type/value even is. Yes, the type system can handle it, but good luck figuring out what a Map<Map<String,String>,List<String>> means. Do this instead, even though it looks "less clean"/uses a supposedly useless variable:
function foo() {
  const helpfulName = x.map()
  return helpfulName
}
function baz() {
  const anotherHelpfulName = foo().map()
  return anotherHelpfulName
}
function fbaz() {
  const superHelpfulName = baz().map()
  return superHelpfulName
}
In summary: please, for the love of all that is holy, resist the urge to write function chains; always store meaningful intermediary values in named variables with "why" comments in relevant places, and do so especially with return values.

2. I agree in general when talking about more complex operations. Simple transformation and filtering rarely needs intermediate variables for readability or debugging, and the name of the result variable already describes the final collection.
3. Never had to deal with this kind of code but I haven't used Kotlin.
2. Yes, easy filterings usually don't need to be broken down/named, but it really depends. At the very least, if the culture is to name intermediary values, you might accidentally get useful information from the variable names even if people weren't diligently writing explanatory why comments.
3. This isn't Kotlin related, it is just that if you do not have a language/codebase with branded types (or some type system property I don't know the name of), the type system might only infer the base primitives of the result, ending up with stuff like the type I mentioned.
To me, reduce is very easy to reason about, and it makes it super easy to properly filter, combine, and extract values without ending up with filters on filters on maps on maps.
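A minimal sketch of that point (the data and field names are made up): a single reduce pass filters and combines in one step, instead of a filter().map() chain that builds intermediate arrays.

```javascript
const orders = [
  { item: "book", price: 12, paid: true },
  { item: "pen", price: 3, paid: false },
  { item: "mug", price: 9, paid: true },
];

// One pass: skip unpaid orders and accumulate the total as we go.
const paidTotal = orders.reduce(
  (sum, o) => (o.paid ? sum + o.price : sum),
  0
);

console.log(paidTotal); // 21
```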
Inevitably you're going to end up having to debug that each of those steps is correct, for that you'll find it a lot easier to break it out.. and the next person who has to do it will as well.
I do think the example is somewhat loaded though; renaming "result" to "top5ActiveUserNames" would do a lot there.
const filteredUsers = users.filter(user => user.active);
Why create a variable if we use it in a single place anyway? It feels redundant, like prefixing "I" in interface names.
So if you like intermediate variables, great. I like them too. I also like having the option of chaining where it's necessary or just more expressive. Writing composable APIs means everyone wins.
> Chaining nudges you toward “process everything,” even when that’s not what you meant to do.
const firstActiveUser = users
.filter(user => user.active)
.map(user => user.name)[0];
> This filters the entire array, maps the result, and then grabs one item.

> When what you actually wanted was:
const user = users.find(user => user.active);
const name = user?.name;
Then if that's what you wanted, do that!
const name = users.find(user => user.active)?.name;
The fact that you processed everything in the first example was entirely your choice; it has nothing to do with chaining.

I love chaining because it reduces the number of occurrences of that problem.