Today it seems like a normal solution to the problem to me, and the controversy seems silly. I have plenty of experience with JavaScript's map function; it's too simple to be objectionable.
But in the '90s, I would also have had trouble understanding the transform. Lambdas and closures were unknown to everyone except Lisp dweebs. Once I figured out what the code was doing, I would have been suspicious of its performance and memory consumption. This was 1994! Kilobytes mattered and optimal algorithmic complexity was necessary for anything to be usable. Much safer to use a well-understood for loop: I had plenty of experience making those fast, and that's what map() must be doing under the hood anyway.
But I would have been wrong! map() isn't doing anything superfluous and I can't do it faster myself. The memory consumption of the temporary decorated array is worth it to parse the last word N times instead of N log N times. Lisp is certainly a slow language compared to C, but that's not because of its lambdas!
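For anyone who hasn't seen it, the decorate-sort-undecorate trick being described can be sketched in Python (used here purely as neutral illustration; the original debate was about Perl, and the sample data is made up):

```python
# Schwartzian Transform: decorate each element with its precomputed sort
# key, sort on the key, then strip the decoration. The expensive key
# computation (here, grabbing the last word) runs N times total, instead
# of O(N log N) times inside a comparison-based sort.
lines = ["alpha omega", "beta zeta", "gamma alpha"]

decorated = [(line.split()[-1], line) for line in lines]  # N key computations
decorated.sort(key=lambda pair: pair[0])                  # no re-parsing here
result = [line for _, line in decorated]                  # undecorate
```

The temporary list of pairs is the "memory consumption of the temporary decorated array" mentioned above: a linear amount of extra space traded for fewer key computations.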
That's not to say that idiomatic Perl doesn't have its quirks and oddities - Perl idioms do throw off people who aren't used to them. But this is a bit far even for Perl.
By then it was a commonly known technique, so it was a pattern a Perl dev would be expected to recognize.
That would be a powerful language feature, but alas
X.isWriteOnly() for X in aLanguagesIdonotUse
is merely akin to
X.hasTooManyRules() for X in aHumanLanguagesIdonotSpeak
1. It did not work, because of course it didn't.
2. This meant that all data output of the system would be done in XML and transformed, so I got to know XSLT quite well.
3. It did give me a fun moment at a conference, where a speaker asked "Who knows the Schwartzian transform or has used it?" and only my coworkers and I raised our hands.
C# evaluates the lambda only once per item, in input order: https://dotnetfiddle.net/oyG4Cv
Java uses "Comparators", which are called repeatedly: https://www.jdoodle.com/ia/1IO5
Rust has sort_by_key() which is encouraging... but also calls the key transformation many times: https://play.rust-lang.org/?version=stable&mode=debug&editio...
C++20 with std::range also calls the lambda repeatedly: https://godbolt.org/z/raa1766PG
Obviously, the Schwartzian Transform can be implemented manually, and elegantly, in any modern language, but it's interesting that of these languages only C# applies it implicitly!
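The difference is easy to observe by counting invocations. A sketch in Python (a language not among those tested above; its `sorted(key=...)` precomputes keys once per element, while a comparator-style sort recomputes them on every comparison):

```python
from functools import cmp_to_key

words = ["banana", "Cherry", "apple", "Date", "elder"]
calls = {"key": 0, "cmp": 0}

def key_fn(w):
    # Used with sorted(key=...): evaluated once per element.
    calls["key"] += 1
    return w.lower()

def cmp_fn(a, b):
    # Comparator style (like Java's Comparators): both keys are
    # recomputed on every comparison the sort performs.
    calls["cmp"] += 2
    ka, kb = a.lower(), b.lower()
    return (ka > kb) - (ka < kb)

sorted(words, key=key_fn)
sorted(words, key=cmp_to_key(cmp_fn))

assert calls["key"] == len(words)  # exactly n key evaluations
assert calls["cmp"] > len(words)   # O(n log n) key evaluations
```

Rust's standard library, for what it's worth, also offers slice::sort_by_cached_key, which precomputes each key once.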
PS: The fancy solution would be a temporary map (a "HashMap<T,K>" rather than a hash set) from input values to their transformed sort keys, so that the mapping Foo(T t): K would only be called once for each unique input value 't'. That way the transformation could be invoked fewer than n times whenever the input contains duplicates!
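That idea can be sketched in Python with a plain dict as the value-to-key cache (the names here are illustrative, not from any real API):

```python
calls = 0

def expensive_key(s):
    # Stand-in for a costly transformation (parsing, formatting, etc.).
    global calls
    calls += 1
    return s.lower()

data = ["b", "a", "b", "a", "c", "a"]

# Compute the key once per *distinct* value, not once per element.
cache = {}
for item in data:
    if item not in cache:
        cache[item] = expensive_key(item)

result = sorted(data, key=cache.__getitem__)

assert calls == 3  # three distinct values, despite six elements
```

The tradeoff is hashing every element on the way into the cache, which only pays off when the key function is genuinely expensive and the input repeats.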
I suspect the profession of software archeology from the Vernor Vinge science fiction novels will soon become a reality, especially now that old languages can be uplifted mechanically with AI.
The implementation is somewhat more sophisticated than just the naive transform:
> The current implementation is based on instruction-parallel-network sort by Lukas Bergdoll, which combines the fast average case of randomized quicksort with the fast worst case of heapsort, while achieving linear time on fully sorted and reversed inputs. And O(k * log(n)) where k is the number of distinct elements in the input. It leverages superscalar out-of-order execution capabilities commonly found in CPUs, to efficiently perform the operation.
> In the worst case, the algorithm allocates temporary storage in a Vec<(K, usize)> the length of the slice.
…oh boy.
In retrospect it doesn’t seem like these people prevailed.