$arr = [
new Widget(tags: ['a', 'b', 'c']),
new Widget(tags: ['c', 'd', 'e']),
new Widget(tags: ['x', 'y', 'a']),
];
$result = $arr
|> fn($x) => array_column($x, 'tags') // Gets an array of arrays
|> fn($x) => array_merge(...$x) // Flatten into one big array
|> array_unique(...) // Remove duplicates
|> array_values(...) // Reindex the array.
;
feels much more complex than writing $result = $arr->column('tags')->flatten()->unique()->values(), given array extension methods [1] for column, flatten, unique and values.

[1]: https://kotlinlang.org/docs/extensions.html#extension-functi...
Let's say you add a reduce in the middle of that chain. With extension methods that would be the last one you call in the chain. With pipes you'd just pipe the result into the next function
The use-case in the article could still be solved easier with extension methods in my opinion :-)
The PHP pipes as described in the articles about it will require a bunch of wrapping anyway so you could just do that. There are several alternatives, from a function or method that just converts from raw array to class, to abstractions involving stuff like __invoke, __call, dispatchers and such.
Also the expectation to not have to put facades on libraries is a bit suspicious, in my experience it is very common. I find it unlikely you actually want to use raw arrays instead of leveraging type guards in your code.
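For illustration, a minimal sketch of that kind of wrapping. All names here (Collection, column, flatten, unique, values, all) are made up for the example, not any framework's actual API:

```php
<?php
// Hypothetical minimal wrapper: converts a raw array into a chainable object.
final class Collection
{
    public function __construct(private array $items) {}

    public function column(string $key): self
    {
        return new self(array_column($this->items, $key));
    }

    public function flatten(): self
    {
        return new self(array_merge(...$this->items));
    }

    public function unique(): self
    {
        return new self(array_unique($this->items));
    }

    public function values(): self
    {
        return new self(array_values($this->items));
    }

    public function all(): array
    {
        return $this->items;
    }
}

$rows = [
    ['tags' => ['a', 'b', 'c']],
    ['tags' => ['c', 'd', 'e']],
];
$tags = (new Collection($rows))->column('tags')->flatten()->unique()->values()->all();
// $tags === ['a', 'b', 'c', 'd', 'e']
```

Each method returns a new instance, so the chain never mutates the original data.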
I'm still not clear over what you want to do or why the third party library expects there to be a hacked in 'fluent' method on the Stringable.
Personally I'm not a fan of the Str/Stringable APIs; I find it weird and confusing to mix methods for file paths, strings, encryption and so on. Actually, I'm more of a Symfony person for reasons like this.
While I know that there are Collection classes in Symfony, Laravel, etc., I'm not a huge fan of wrapping a PHP array with a class to get method chaining, even with generators.
$sum = [1, 2, 3]->filter(fn($x) => $x % 2 != 0)->concat([5, 7])->sum();
cannot be solved with traits. Additionally, I think traits should be used very carefully, and they don't have many use cases that aren't a code smell to me.

```
val arr = ...
val result = arr
    .let { column(it, "tags") }
    .let { merge(it) }
    .let { unique(it) }
    .let { values(it) }
```
You can add function references for single-argument functions too:

```
arr.let(::unique) // or (List<>::unique), depends on the function
```
all without adding a special language construct.
While converting arrays to collection objects is a suitable option that does work, it would feel much more "native" if there were extension methods for Iterable / Traversable.
# pipeline functional style
(1..5)
==> map { $_ * 2 }
==> grep { $_ > 5 }
==> say(); # (6 8 10)
# method chain OO style
(1..5)
.map( * * 2)
.grep( * > 5)
.say; # (6 8 10)
It uses ==> for rightward feeds and <== for leftward.

True, it is syntax sugar, but often the pipe feed is quite useful for making chaining very obvious.
I really believe the thing PHP needs most is a rework of string / array functions to make them more consistent and chainable. Now they are at least chainable.
I'm not a fan of the ... syntax though, especially when mixed in the same chain with the spread operator
My initial instinct would be to write like this:
`$result = $arr
|> fn($arr) => array_column($arr, 'tags') // Gets an array of arrays
|> fn($cols) => array_merge(...$cols)`
Which makes me wonder how this handles scope. I'd imagine the interior of some chained function can't reference the input $arr, right? Does it allow pass by reference?

You can use

function ($parameter) use ($data) { ... }

to capture stuff from the local environment.

Edit: And you can pass by reference:
> $stuff = [1]
= [
1,
]
> $fn = function ($par) use (&$stuff) { $stuff[] = $par; }
= Closure($par) {#3980 …2}
> $fn(2)
= null
> $stuff
= [
1,
2,
]
Never done it in practice, though; not sure if there are any footguns besides the obvious hazards in remote mutation.

My feeling is that this makes the code less legible. I'd rather write 5 lines of code that mutate an object or return a copy than do a pipe this way. I'm sort of not excited to start running into examples of this in the wild.
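A plain-script version of the PsySH session above, runnable outside the REPL:

```php
<?php
// Capturing $stuff by reference with use (&$stuff) lets the closure
// mutate the outer variable instead of working on a copy.
$stuff = [1];

$fn = function ($par) use (&$stuff) {
    $stuff[] = $par; // appends to the captured array itself
};

$fn(2);
// $stuff is now [1, 2]
```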
string functions use (haystack, needle) and array functions use (needle, haystack)
because that's the way the underlying C libraries also worked
array_filter takes (arr, callback)
https://www.php.net/manual/en/function.array-filter.php
array_map takes (callback, arr)
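The two signatures side by side, as a quick runnable check:

```php
<?php
// array_filter takes (array, callable); array_map takes (callable, array).
$evens   = array_filter([1, 2, 3, 4], fn($n) => $n % 2 === 0);
$doubled = array_map(fn($n) => $n * 2, [1, 2, 3, 4]);
// array_filter preserves keys: $evens === [1 => 2, 3 => 4]
// array_map returns fresh keys:  $doubled === [2, 4, 6, 8]
```

Note the extra wrinkle: array_filter preserves the original keys, which is exactly the kind of inconsistency people reach for array_values to paper over.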
Array filter is "filter this array with this function".
Array map is "map this function over this array".
But I agree any replacement function should be consistent with Haskell.
"Filter for this function in this array"
"Map over this array with this function"
One filters something with something else, in the real world. Filter water with a mesh etc.
And (in maths, at least) one maps something onto something else. (And less commonly one maps an area onto paper etc.)
Just because you can make your two sentences does not make them natural word order.
There are enough viral videos online of how even neighbouring European countries order common sentences differently. Even little things like reading the time (half past the previous hour vs half to the next hour) and counting are written differently in different languages.
So modelling the order of parameters based on English vernacular doesn’t make a whole lot of sense for programming languages used by programmers of all nationalities.
Well that’s good, because I didn’t.
Yes, but that's the opposite of what you said earlier. You might map x onto 2*x, for example. Or, if you're talking about a collection, you might map the integers 0..10 on to double their value. Data first, then the way you're manipulating it. I'm a mathematician and this is what makes sense to me.
I would only say "map this function..." if the function itself is being manipulated somehow (mapped onto some other value).
When you use the correct verbiage, the parameter order makes sense.
One function works against a single element, whereas the other works against multiple. In that case, the parameter order is more meaningful. You can use array_walk if you want (arr, callback), but that only works against a single array -- similarly to array_filter.
I feel like this is a weak defence of the internally inconsistent behaviour. As someone who has been programming with PHP for over twenty years now, most of them professionally, I still cannot remember the needle/haystack order in these functions, I thank intellisense for keeping me sane here.
As evident with this pipe operator, or with for example Attributes, PHP does not need to religiously follow the C way of doing things, so why not improve it instead of dismissing it as "it is the way it is because that is the way it was"?
There isn't a good reason for PHP to have inherited C's issues here.
We are not in the early days though, and in many other aspects PHP evolved greatly.
If we want to change the param order of str/array functions for php, I think we should start with fixing the C libraries. That seems like a better starting point. The impact will certainly be more beneficial to even more developers than just php.
The fact that they have been chaotic for 30 years is not a valid reason for keeping them chaotic now.
Also, I'm not even arguing they should change the existing functions, that would break all existing code for almost no reason.
I think they should "simply" support methods on primitives, and implement the main ones in a chainable way:
"test string"->trim()->upper()->limit(100);
[0,1,2]->filter(fn ($n) => $n % 2 === 0)->map(fn($n) => $n * 2);
I would love this so much
How is that consistent?
$result = $arr
|> select_column('tags') // Gets an array of arrays
|> fn($x) => array_merge(...$x) // Flatten into one big array
|> array_unique // Remove duplicates
|> array_values // Reindex the array.

For example, ktor (one of the server frameworks) can actually work with Kotlin native, but it's not that well supported. This is not using Graal or any of the JVM stuff at runtime (which of course is also a viable path, but a lot more heavyweight). With Kotlin native, the Kotlin compiler compiles directly to native code and uses multiplatform libraries with native implementations. There is no Java standard library, and none of the JVM libraries are used.
The same compiler is also powering iOS native with Compose Multiplatform. On iOS the libraries are a bit more comprehensive, and it's starting to become a proper alternative to things like Flutter and React Native. It also has pretty decent Objective-C and Swift integration (both ways) that they are currently working on improving.
In any case, it's pretty easy to write a command line thingy in Kotlin. Use Clikt or similar for command line argument parsing.
Jetbrains seems to be neglecting this a bit for some reason. It's a bit of a blind spot in my view. Their wasm support has similar issues. Works great in browsers (and supported with compose as well) but it's not a really obvious choice for serverless stuff or edge computing just yet; mainly because of the library support.
Swift is a bit more obvious but has the issue that Apple seems to think of it as a library for promoting vendor lockin on their OS rather than as a general purpose language. Both have quite a bit of potential to compete with Go for system programming tasks.
Kotlin in my mind was always Android and the JVM, so I never paid attention to it.
With a better language you'd know in what order params are passed (array_map / array_filter), but in PHP it's a coin toss.
This feels very bolted on and not suited for the stdlib at all.
PHP devs should instead FIRST focus on full Unicode support (no, the mb_real_uppercase won't do), and only then on a new namespaced stdlib with better design.
We definitely need a better stdlib with appropriate data structures
I think initiatives like this drive the need for more consistency, and even if slowly, PHP has been deprecating/reworking its stdlib, so I'm hopeful about this.
I think that callables will end up being useless in this context, and everyone will pipe closures to put that $x wherever the stdlib imposes.
I also prefer the look of ->, it’s _cool_
-> in PHP and C++ looks clean by comparison.
I'll never forgive them for the brain fart they made of the namespace separator, though.
You mean the backslash? What's wrong with that?
It was decided almost 20 years ago so I'm totally used to it and there's no point arguing about it anymore. But the decision to reuse the backslash as a namespace separator still causes inconvenience from time to time. For example, when you write PSR-4 configuration in composer.json, all the backslashes need to be doubled, including (and especially!) the trailing backslash.
Just to be clear: consistency does very much matter. The mental load of reading totally different styles of code is awful and a waste of energy.
Because it simply can't do that in a retro-compatible way. -> isn't so bad; C/C++ uses that as well. As for $, I guess it came from Perl. The dot is already used for string concatenation, where other languages would overload the + operator.
I actually like the clarity these dollar signs add in a code base. Makes it easier to recognise (dynamic) functions, and makes it harder to accidentally shadow methods.
Other languages will let you do `const Math = {}` and nuke the entire math library, or write stuff like `int fopen = 0;` to make the fopen method call inaccessible in that scope. With PHP, you don't need to restrict your variable name to "something that hopefully won't conflict with an obscure method".
The -> is a leftover from an older programming language that I'd rather have replaced by a ., but not at the cost of breaking existing code (which it surely would).
Isn't it because . was already used for string concatenation in PHP. I mean the -> syntax wasn't invented by PHP but it didn't just inherit it without thought either.
As a result, taking a PHP 5.2 script and moving it up to 8.5 is super easy, and taking a PHP 4 one is barely harder, only longer (since it probably uses the horrors that were register_globals and co).
Ultimately, I prefer this to a fragmented ecosystem impossible to resolve.
$myString.trim().replace("w", "h");
Which has the advantage of also offering a clean alternative to the fragmented stdlib.
$myString->trim()->replace("w", "h");
And those functions can be business logic, or validation, or... Not just object methods
With this sort of "just" I could build Paris out of matchsticks
I believe they have solved this problem by now. Though no idea how.
function($x) { return array_key_exists('needle', $x); }
Or using the arrow function syntax: fn($x) => array_key_exists('needle', $x)
The same trick also helps when you need to use functions with mandatory extra parameters, functions with pass-by-value parameters, etc.

For built-in functions, if the function does not accept any parameters, it cannot be used in a chain. For user-land PHP functions, passing a parameter to a function that does not accept any parameters does not cause an error, and it is silently ignored.
With the pipe operator, the return value of the previous expression or the callable is always passed as the first parameter to the next callable. It is not possible to change the position of the parameter."
https://php.watch/versions/8.5/pipe-operator
In the light of these limitations I would not call the Elixir implementation "slightly fancier".
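The closure-wrapping trick from the quote can be seen without the pipe operator at all; calling the closure directly simulates what |> would pass in (a sketch, with made-up data shapes):

```php
<?php
// Each closure pins every argument except the one the pipe would supply.
// Calling them directly here stands in for what |> passes as the sole argument.
$hasNeedle = fn(array $x) => array_key_exists('needle', $x);
$takeTags  = fn(array $x) => array_column($x, 'tags');

$found = $hasNeedle(['needle' => 1, 'other' => 2]); // true
$tags  = $takeTags([['tags' => ['a']], ['tags' => ['b']]]); // [['a'], ['b']]
```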
I'm not so sure I'll be upgrading my local PHP version just for this but it's nice that they are adding it, I'm sure there is a lot of library code that would look much better if rewritten into this style.
$result = $arr
|> fn($x) => array_column($x, 'tags')
|> fn($x) => array_merge(...$x)
|> array_unique(...)
|> array_values(...)
vs.

array_values(array_unique(array_merge(...array_column($arr, 'tags'))));

Or with intermediate variables:

$tags = array_column($arr, 'tags');
$merged_tags = array_merge(...$tags);
$unique_tags = array_unique($merged_tags);
$tag_values = array_values($unique_tags);
It also makes it easier to inspect the values after each step.

Readability is mostly a matter of habit. One reads easily what one is used to reading.
They can be. It depends on the language, interpreter, compiler, and whether you do anything with those intermediate variables and the optimiser can get rid of them.
That's like saying someone would use this:
$result = $arr |> fn($x) => array_column($x, 'tags') |> fn($x) => array_merge(...$x) |> array_unique(...) |> array_values(...)
which is harder to reason about than the nested functions:

array_values( array_unique( array_merge( ...array_column($arr, 'tags') ) ) );
or array_values(
array_unique(
array_merge(
...array_column($arr, 'tags')
)
)
);

$result = $arr
|> fn($x) => array_column($x, 'tags')
|> fn($x) => array_merge(...$x)
|> array_unique(...)
|> array_values(...)
vs

array_values(
array_unique(
array_merge(
...array_column($arr, 'tags')
)
)
);
With pipes you have a linear sequence of data transformations. With nested function calls you have to start with the innermost function and proceed all the way up to the outermost layer.

Preferably you should also be sure that the functions are compatible with the data type going in, and only rarely have to break the chain to dump data mid-way. If you expect that kind of erroring, it's likely that a builder chain with -> is a better alternative, with logging done in the methods.
With PHP allowing variable initialization in one branch but not the other, and continuing execution by default when an undeclared variable is passed, declaring more variables can lead to an annoying class of bugs that would require significant (breaking) changes to the core language to completely eliminate.
It's much easier to skim with the pipe operator, and it's more robust too (for example, reordering is a pain with variables; it's easy to introduce errors).
One way this is prevented in PHP is just using functions. But then you have functions just for the sake of scope, which isn't really what they're for. That introduces other annoyances.
Is it though? I don't think so.
https://wiki.php.net/rfc/partial_function_application_v2 https://wiki.php.net/rfc/function-composition
Does PHP support iterator-like objects? Like Python I mean, where mydict.values() produces values on demand, not immediately realised as a list. Or are all steps necessarily guaranteed to be fully realised into a complete list?
Meanwhile, I'm confused as to why it sometimes says “map” and sometimes “array_map”. The latter is what I'm familiar with and I know that it operates on a whole array with no lazy evaluation. If “map” isn't just a shorthand and actually creates a lazy-evaluated iterable, then I'm confused as to why the function composition would make any difference.
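To the laziness question: PHP does have Python-style on-demand iteration via generators (which implement Traversable). array_map and array_filter are eager, but a generator pipeline only realises values when driven. A sketch with hypothetical helper names lazyMap/lazyFilter:

```php
<?php
// Generators give lazy iteration: values are produced on demand, not
// materialised into a full array at each step.
function lazyMap(callable $f, iterable $it): Generator
{
    foreach ($it as $v) {
        yield $f($v);
    }
}

function lazyFilter(callable $f, iterable $it): Generator
{
    foreach ($it as $v) {
        if ($f($v)) {
            yield $v;
        }
    }
}

$pipeline = lazyFilter(
    fn($n) => $n > 5,
    lazyMap(fn($n) => $n * 2, [1, 2, 3, 4, 5])
);
// Nothing has executed yet; iterator_to_array() drives the whole pipeline.
$result = iterator_to_array($pipeline, false);
// $result === [6, 8, 10]
```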
I don't think there's any significant push for an even terser syntax at the moment.
I would likely never touch it as there are too many languages to use and what I know is more than enough to do my job, but I am super excited to see languages like PHP that aren't mainstream in my bubble to keep evolving
There was a point where I thought the language and its ecosystem were going down the drain, but then they recovered, and modern PHP is 90% "what do you want to do" without worrying about the how. It's easy.
I don't use it much anymore, but every time I do all I see are possibilities.
Of course not if you use vm or serverless or whatever like this, but for a basic here is my crude app, that's what you do.
Or if you want to go old school sure, just scp that directory, it still works like it did 30 years ago.
The standard library has a lot of good stuff for calling API:s, handling JSON, shelling out, string juggling and HTML publishing on a socket. In every typical install you also have common database interfaces. I've done so much problem solving at breathtaking speed in single file PHP scripts and PsySH over the years.
The threading story isn't or wasn't very good so typically I've done logic in PHP and then driven it from something like a Scheme, Picolisp or Elixir when I've needed it.
$arr
|> fn($x) => array_column($x, 'tags')
Why doesn't this work?

$arr
|> array_column(..., 'tags')

And when that doesn't work, why doesn't this work?

$arr
|> array_unique

They write that Elixir has a slightly fancier version; this is likely what they mean (Elixir has first-class support for arity > 1 functions).
So where in Python you would say e.g.
callbacks = [f, g]
PHP requires the syntax $callbacks = [f(...), g(...)];
As for the purpose of the feature as a whole, although it seems like it could be replaced with function composition as mentioned at the end of the article, and the function composition could be implemented with a utility function instead of dedicated syntax, the advantage of adding these operators is apparently [2] performance (fewer function calls) and facilitating static type-checking.

[1] https://wiki.php.net/rfc/first_class_callable_syntax
[2] https://wiki.php.net/rfc/function-composition#why_in_the_eng...
https://wiki.php.net/rfc/partial_function_application_v2 https://wiki.php.net/rfc/pipe-operator-v3#rejected_features
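The f(...) first-class callable syntax mentioned above (PHP 8.1+) can be checked quickly:

```php
<?php
// f(...) names a function as a Closure without calling it,
// so functions can be stored in arrays like any other value.
$callbacks = [strtoupper(...), strrev(...)];

$out = [];
foreach ($callbacks as $cb) {
    $out[] = $cb('abc');
}
// $out === ['ABC', 'cba']
```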
fun showit( s : string )
s.encode(3).count.println
However, this is of course impossible to implement in most languages, as the dot is already meaningful for something else.

Are there other syntax helpers in that language to overcome this?
D has had this for decade(s): https://tour.dlang.org/tour/en/gems/uniform-function-call-sy...
Nim too has it: https://nim-by-example.github.io/oop/
Forget about transforming existing code, it makes new code much more reasonable (the urge to come up with OOPslop is much weaker when functions are trivial) — they're programming languages for a reason.
For short constructions, '$out = sort(fn($in))' is really easier to read. For longer ones you can break them up over multiple lines.
$_ = fn_a($in)
$_ = fb_b($_)
$out = fn_c($_)
Is it really "cognitive overhead" to have the temporary variable explicit? Being explicit can be a virtue. Readability matters in a programming language; if nothing else, I think Python taught us that.

I am skeptical of these types of sugar. Often what you really want is an iterator. The ability to hide that need carries clear risk.
sum 1 2
|> multiply 3
and it works because |> pushes the output of the left expression as the last parameter into the right-hand function. multiply has to be defined as: let multiply b c = b * c
so that b becomes 3, and c receives the result of sum 1 2.

The RHS can also be a lambda:
sum 1 2 |> (fun x -> multiply 3 x)
|> is not syntactic sugar but is actually defined in the standard library as: let (|>) x f = f x
For function composition, F# provides >> (forward composition) and << (backward composition), defined respectively as: let (>>) f g x = g (f x)
let (<<) f g x = f (g x)
We can use them to build reusable composed functions: let add1 x = x + 1
let multiply2 x = x * 2
let composed = add1 >> multiply2
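Those F# definitions translate almost directly into userland PHP helper functions; the names pipe and compose below are mine, not stdlib functions:

```php
<?php
// Userland equivalents of F#'s |> and >>.
function pipe(mixed $x, callable $f): mixed
{
    return $f($x); // let (|>) x f = f x
}

function compose(callable $f, callable $g): callable
{
    return fn($x) => $g($f($x)); // let (>>) f g x = g (f x)
}

$add1      = fn($x) => $x + 1;
$multiply2 = fn($x) => $x * 2;

$composed = compose($add1, $multiply2);
// $composed(5) === 12, i.e. (5 + 1) * 2
```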
F# is a beautiful language. Sad that M$ stopped investing in this language long ago and that there's not much interest in (typed) functional programming languages in general.

It is indeed a shame that F# never became a first-class citizen.
OCaml is a great language, as are others in the ML family. Isabelle was the first language to introduce the |> pipe character, I think.
Mind you, I know and like Haskell, but its issues are highly tied to the failure of the simple haskell initiative (also the dreadful state of its tooling).
I thought for a while I'd be able to focus on getting jobs that liked haskell. it never happened.
Also, I've found Haskell appropriate for some one-off tasks over the years, e.g.
- Extracting a load of cross-referenced data from a huge XML file. I tried a few of our "common" languages/systems, but they all ran out of memory. Haskell let me quickly write something efficient-enough. Not sure if that's ever been used since (if so then it's definitely tech debt).
- Testing a new system matched certain behaviours of the system it was replacing. This was a one-person task, and was thrown away once the old system was replaced; so no tech debt. In fact, this was at a PHP shop :)
I use spark for most tasks like that now. Guido stole enough from haskell that pyspark is actually quite appealing for a lot of these tasks.
He didn't do his homework. Guido or whoever runs things around the python language committee nowadays didn't have enough mental capacity to realize that the `match` must be a variable bindable expression and never a statement to prevent type-diverging case branches. They also refuse to admit that a non-blocking descriptor on sockets has to be a default property of runtime and never assigned a language syntax for, despite even Java folks proving it by example.
There is also some popular user facing software like Pandoc, written in Haskell. And companies using it internally.
The Agda compiler, Pugs, Cryptol, Idris, Copilot (not that copilot you are thinking of), GHC, PureScript, Elm…
These might not be mainstream, but are (or were for Pugs, but the others are current) important within their niche.
this is plain and unsubstantiated FUD
> Haskell has none after 30 years
> I know Haskell
I doubt it
If you are to add community notes to my comments, at least add the part that clarifies that I only lambast incompetence and lies.
> risking giving the Haskell community a bad name
as opposed to those that spread FUD, I suppose? It's not the first time I'm asking this question, so what's your take on people who inflate their credibility by telling lies about the tech they clearly don't know?
https://redmonk.com/sogrady/2020/02/28/language-rankings-1-2...
https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...
I think of languages as falling in roughly 3 popularity buckets:
1. A dominant conservative choice. These are ones you never have to justify to your CTO, the "no one ever got fired for buying IBM" languages. That's Java, Python, etc.
2. A well-known but deliberate choice. These are the languages where there is enough ecosystem and knowledge to be able to justify choosing them, but where doing so still feels like a deliberate engineering choice with some trade-offs and risk. Or languages where they are a dominant choice in one domain but less so in others. Ruby, Scala, Swift, Kotlin.
3. Everything else. These are the ones you'd have to fight to use professionally. They are either new and innovative or old and dying.
In 2020, Haskell was close to Kotlin, Rust, and Dart. They were in the 3rd bucket but their vector pointed towards the second. In 2025, Kotlin and Dart have pulled ahead into the second bucket, but Haskell is moving in the other direction. It's behind Perl, and Perl itself is not exactly doing great.
None of this is to say that Haskell is a bad language. There are many wonderful languages that aren't widely used. Popularity is hard and hinges on many extrinsic factors more than the merits of the language itself. Otherwise JavaScript wouldn't be at the top of the list.
> It's behind Perl, and Perl itself is not exactly doing great.
Your comment reminded me of gamers who "play games" by watching "letsplay" videos on youtube.
It's usually called operator because it uses an infix notation.
And there are also the reverse pipes (<|, <|| and <|||)
F# is, for me, the single most ergonomic language to work in. But yeah, M$ isn't investing in it, so there are very few opportunities to actually work with F# in the industry either.
Imagine you're just scanning code you're unfamiliar with trying to identify the symbols. Make sense of inputs and outputs, and you come to something as follows.
$result = $arr
|> fn($x) => array_column($x, 'values')
|> fn($x) => array_merge(...$x)
|> fn($x) => array_reduce($x, fn($carry, $item) => $carry + $item, 0)
|> fn($x) => str_repeat('x', $x);
Look at this operation imagining you're reading a big section of code you didn't write. This is embedded within hundreds or thousands of lines. Try to just make sense of what "result" is here. Do your eyes immediately shoot to its final line to get the return type?

My initial desire is to know what $result is, generally speaking, before I decide whether I want to dive into its derivation.
It's a string. To find that out though, you have to skip all the way to the final line to understand what the type of $result is. When you're just making sense of code, it's far more about the destination than the path to get there, and understanding these require you to read them backwards.
Call me old fashioned, I guess, but the self-documenting nature of a couple of variables defining what things are or are doing seems important to writing maintainable code and lowering the maintainers' cognitive load.
$values = array_merge(...array_column($arr, 'values'));
$total = array_reduce($values, fn($carry, $item) => $carry + $item, 0);
$result = str_repeat('x', $total);

The pipe operator (including T_BLING) was one of the few things I enjoyed when writing Hack at Meta.
I think the parent is referring to what the result _means_, rather than its type. Functional programming can, at times, obfuscate meaning a bit compared to good ol’ imperative style.
Same as with `array_merge(...array_column($arr, 'values'));` or similar nested function calls.
> Imagine you're just scanning code you're unfamiliar with trying to identify the symbols. Make sense of inputs and outputs, and you come to something as follows.
We don't have to imagine :) People working in languages supporting pipes look at similar code all day long.
> but the self-documentating nature of a couple variables defining what things are or are doing seems important to writing maintainable code
Pipes do not prevent you from using a couple of variables.
In your example I need to keep track of $values variable, see where it's used, unwrap nested function calls etc.
Or I can just look at the sequential function calls.
What PHP should've done though is just pass the piped value as the first argument of any function. Then it would be much cleaner:
$result = $arr
|> array_column('values')
|> array_merge()
|> array_reduce(fn($carry, $item) => $carry + $item, 0)
|> fn($x) => str_repeat('x', $x);
I wouldn't be surprised if that's what will eventually happen.

Quick summary: Hack used $$ (aka T_BLING) as the implicit parameter in a pipeline. That wasn't accepted, as much fun as the name T_BLING can be. PHP looked for a solution and started looking for a partial function application syntax they were happy with. That effort mostly deadlocked (though they hope to return to it), except for the syntax some_function(...) for an unapplied function (naming a function without calling it).
Seems like an interesting artifact of PHP functions not being first class objects. I wish them luck on trying to clean up their partial application story further.
But maybe also, the pipe syntax would be better as:
$arr
|> fn($x) => array_column($x, 'values')
|> fn($x) => array_merge(...$x)
|> fn($x) => array_reduce($x, fn($carry, $item) => $carry + $item, 0)
|> fn($x) => str_repeat('x', $x)
|= $result;

$result = $obj->query($sqlQuery)->fetchAll()[$key]
so while the syntax is not my favorite, it at least maintains consistency between method chaining and now function chaining (by pipe).

I don't find the pipe alternative to be much harder to read, but I'd also favour the first one.
In any case, we shouldn't judge software and its features on familiarity.
$result = $arr
->column('values')
->merge()
->reduce(fn($carry, $item) => $carry + $item, 0)
->repeat('x');
I think this just comes down to familiarity.

$result = $arr |> fn($x) |=>
array_column($x, 'values'),
array_merge(...$x),
array_reduce($x, fn($carry, $item) => $carry + $item, 0),
str_repeat('x', $x);
As teaching the parser to distribute `fn($x) |=> ELEM1, ELEM2` into `fn($x) => ELEM1 |> fn($x) => ELEM2 |> …` so that the user isn’t wasting time repeating it is exactly the sort of thing I love from Perl, and it’s more plainly clear what it’s doing — and in what order, without having to unwrap parens — without interfering with any successive |> blocks that might have different needs.

Of course, since I come from Perl, that lends itself well to cleaning up the array rollup in the middle using a reduce pipe, and then replacing all the words with operators to make incomprehensible gibberish, but no longer needing to care about $x at all:
$result = $arr |> $x:
||> 'values'
|+< $i: $x + $i
|> str_repeat('x', $x);
Which rolls up nicely into a one-liner that is completely comprehensible if you know that | is column, + is merge, < is reduce, and have the : represent the syntactic sugar for conserving repetitions of fn($x) into $x using a stable syntax that the reduce can also take advantage of: $result = $arr |> $x: ||> 'values' |+< $i: $x + $i |> str_repeat('x', $x);
Which reads as a nice simple sentence, since I grew up on Perl, that can be interpreted at a glance because it fits within a glance!

So. I wouldn’t necessarily implement everything I can see possible here, because Perl proved that the space of people willing to parse symbols rather than words is not the complete programmer space. But I do stand by the helpfulness of the switch-like |=> as defined above =)
You might have $values and then you transform it into $b, $values2, $foo, $whatever, and your code has to be eternally vigilant that it never accidentally refers to $values or any of the intermediate variables ever again since they only existed in service to produce some downstream result.
Sometimes this is slightly better in languages that let you repeatedly shadow variables, `$values = xform1($values)`, but we can do better.
That it's hard to name intermediate values is only a symptom of the problem where many intermediate values only exist as ephemeral immediate state.
Pipeline style code is a nice general way to keep the top level clean.
The pipeline transformation specifically lets you clean this up with functions at the scope of each ephemeral intermediate value.
Anyone can write good or bad code. Avoiding new functionality and syntax won’t change that.
{
$foo = 'bar'; // only defined in this block
}
I use this reasonably often in Go, I wish it were a thing in PHP. PHP allows blocks like this, but they seem to be no-ops best I can tell.

A lot of people could say the same of the rest/spread syntax as well.
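For comparison, JavaScript's `let`/`const` give exactly this Go-style behavior (a small illustrative sketch): the binding is confined to the block, whereas a bare `{ }` block in PHP does not introduce a new variable scope.

```javascript
let seen;
{
  const foo = "bar"; // only defined in this block
  seen = foo;        // copy the value out before the binding dies
}
// foo is not reachable here; referencing it directly would throw a
// ReferenceError, and typeof foo reports "undefined".
console.log(seen);
```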
|> fn($x) => array_column($x, 'tags')
Why is that inlined function necessary? Why not just |> array_column(..., 'tags')
? I mean, I understand that it is because of the way this operator was designed. But why?
This syntax is invalid. But it will be possible next year with the proposed partial function application rfc
array_column(?, 'tags')
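Until such syntax exists, the idea can be emulated with a small helper; here's a hedged JavaScript sketch of partial application with a placeholder sentinel (the `_` symbol and the `partial`/`column` helpers are assumptions for illustration, not part of any proposal):

```javascript
// A placeholder marking the argument slot to fill in later.
const _ = Symbol("placeholder");

// partial(fn, a, _, c) returns a function that fills the _ slots
// from its own arguments, left to right.
const partial = (fn, ...bound) => (...args) =>
  fn(...bound.map((b) => (b === _ ? args.shift() : b)));

// Roughly the shape of array_column(?, 'tags') from the PHP RFC:
const column = (rows, key) => rows.map((row) => row[key]);
const columnTags = partial(column, _, "tags");
const tags = columnTags([{ tags: "a" }, { tags: "b" }]);
```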
[0]: https://tour.dlang.org/tour/en/gems/uniform-function-call-sy...
And there's nothing abnormal about pipes
$result = $arr
|> fn($x) => array_column($x, 'tags') // Gets an array of arrays
|> fn($x) => array_merge(...$x) // Flatten into one big array
|> array_unique(...) // Remove duplicates
|> array_values(...) // Reindex the array.
; // <- wtf
Ruby: result = arr.uniq.flatten.map(&:tags)
I understand this is not a pipe operator, but just look at that character difference across these two languages.

// <- wtf
This comment was my $0.02.
I'm thinking of something like this:
class Object
def |>(fn)
fn.call(self)
end
end
which can then be used in the following way: result = arr
|> ->(a) { a.uniq }
|> ->(a) { a.flatten }
|> ->(a) { a.map(&:tags) }
Or if we just created an alias for the #then method: class Object
alias_method :|>, :then
end
then it can be used like this: arr
|> :uniq.to_proc
|> :flatten.to_proc
|> ->(a) { a.map(&:tags) }

For example, in php 8.5 you’ll be able to do:
[1,1,2,3,2] |> unique
And then define “unique” as a constant with a callback assigned to it, roughly like:
const unique = static fn(array $array) : array => array_unique($array);
Much better.
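The analogous move works in JavaScript today (a sketch; there's no pipe operator yet, so the last step is an ordinary call rather than `[1,1,2,3,2] |> unique`):

```javascript
// Name the unary step once as a constant, then reuse it everywhere.
const unique = (array) => [...new Set(array)];

const result = unique([1, 1, 2, 3, 2]); // stands in for [1,1,2,3,2] |> unique
```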
It's the same reason PHP allows trailing commas in all lists.
Most PHP code I see looks like your Ruby example.
- pipes make you realize how much song and dance you do for something quite simple. Nesting, interstitial variables, etc. all obscure what is in effect a very orderly set of operations.
- pipes really do have to be a first class operator of the language. I’ve tried using some pipe-like syntactic sugar in languages without pipes and while it does the job, a lot of elegance and simplicity is lost. It feels like you are using a roundabout thing and thus, in the end, doesn’t really achieve the same level of simplicity. Things can get very deranged if you are using a language in a way it wasn’t designed for and even though I love pipes I’ve seen “fake pipes” make things more complicated in languages without them.
But any changes making mainstream languages more functional are highly welcome! It’s just more ergonomic than imperative code.
bapak•6mo ago
wouldbecouldbe•6mo ago
EGreg•6mo ago
More like thenables / promises
wouldbecouldbe•6mo ago
bapak•6mo ago
cyco130•6mo ago
senfiaj•6mo ago
EGreg•6mo ago
In chaining, methods all have to be part of the same class.
In C++ we had this stuff ages ago, it’s called abusing streaming operators LMAO
bapak•6mo ago
Dots are not the same; nobody wants to use chaining like underscore/lodash allowed, because it makes dead code elimination impossible.
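A sketch of why (all names hypothetical): with chaining, every operation must be a method on the shared wrapper class, so a bundler that keeps the class has to keep all of its methods; free functions are independent exports, and unused ones can be dropped.

```javascript
// Chaining: every operation lives on the wrapper, so a bundler that sees
// `new Chain(...)` must keep ALL methods, even the ones never called.
class Chain {
  constructor(v) { this.v = v; }
  map(f) { return new Chain(this.v.map(f)); }
  filter(f) { return new Chain(this.v.filter(f)); }
  sum() { return this.v.reduce((a, b) => a + b, 0); } // kept even if unused
}

// Free functions: each stands alone; an unused `filter` import can be elided.
const map = (f) => (xs) => xs.map(f);
const filter = (f) => (xs) => xs.filter(f);

const chained = new Chain([1, 2, 3]).map((x) => x * 2).v;
const free = map((x) => x * 2)([1, 2, 3]);
```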
pjmlp•6mo ago
boobsbr•6mo ago
lioeters•6mo ago
troupo•6mo ago
Keyword: almost. Pipes don't require you to have many different methods on every possible type: https://news.ycombinator.com/item?id=44794656
te_chris•6mo ago
Martinussen•6mo ago
chilmers•6mo ago
If people really believe this new syntax will make it harder to code in JS, show some evidence. Produce a study on solving representative tasks in a version of the language with and without this feature, showing that it has negative effects on code quality and comprehension.
robertlagrant•6mo ago
purerandomness•6mo ago
hajile•6mo ago
If I'm using a chained library and need another method, I have to understand the underlying data model (a leaky abstraction) and also must have some hack-ish way of extending the model. As I'm not the maintainer, I'm probably going to cause subtle breakages along the way.
Pipe operators have none of these issues. They are obvious. They don't need to track state past the previous operator (which also makes debugging easier). If they need to be extended, look at your response value and add the appropriate function.
Composition (whether with the pipe operator or not) is vastly superior to chaining.
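A small JavaScript sketch of that extension story (the function names are made up): extending a composition means adding one more plain function, with no access to anyone's class required.

```javascript
// Compose two unary functions left to right.
const compose2 = (f, g) => (x) => g(f(x));

const shout = (s) => s.toUpperCase();
const exclaim = (s) => s + "!"; // our own extension; no wrapper to patch

const loud = compose2(shout, exclaim);
const out = loud("hi");
```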
lacasito25•6mo ago
let res
res = op1()
res = op2(res.op1)
res = op3(res.op2)
type inference works great, and it is very easy to debug and refactor. In my opinion even more than piping results.
Javascript has enough features.
avaq•6mo ago
We wanted a pipe operator that would pair well with unary functions (like those created by partial function application, which could get its own syntax), but that got rejected on the premise that it would lead to a programming style that utilizes too many closures[0], and which could divide the ecosystem[1].
Yet somehow PHP was not limited by these hypotheticals, and simply gave people the feature they wanted, in exactly the form it makes most sense in.
[0]: https://github.com/tc39/proposal-pipeline-operator/issues/22... [1]: https://github.com/tc39/proposal-pipeline-operator/issues/23...
lexicality•6mo ago
avaq•6mo ago
All we're asking for is the ability to rewrite that as `2 |> Math.sqrt`.
What they're afraid of, as I understand it, is that people may hypothetically start leaning more on closures, which themselves perform worse than classes.
However I'm of the opinion that the engine implementors shouldn't really concern themselves to that extent with how people write their code. People can always write slow code, and that's their own responsibility. So I don't know about "silly", but I don't agree with it.
Unless I misunderstood and somehow doing function application a little different is actually a really hard problem. Who knows.
nilslindemann•6mo ago
avaq•6mo ago
The goal is to linearize unary function application, not to make all code look better.
sir_eliah•6mo ago
dotancohen•6mo ago
You are going to have to deal with it as a mess at some point. One of the downfalls of perl was the myriad of ways of doing any particular thing. We would laugh that perl was a write only language - nobody knew all the little syntax tricks.
jeroenhd•6mo ago
Furthermore, I don't see why engines should police what is or isn't acceptable performance. Using functional interfaces (map/forEach/etc.) is slower than using for loops in most cases, but that didn't stop them from implementing those interfaces either.
I don't think there's that much of a performance impact when comparing
and especially when you end up writing code like when using existing language features.

ufo•6mo ago
hajile•6mo ago
Loads of features have been added to JS that have worse performance or theoretically enable worse performance, but that never stopped them before.
Some concrete (not-exhaustive) examples:
* Private variables are generally 30-50% slower than non-private variables (and also break proxies).
* let/const are a few percent slower than var.
* Generators are slower than loops.
* Iterators are often slower due to generating garbage for return values.
* Rest/spread operators hide that you're allocating new arrays and objects.
* Proxies cause insane slowdowns of your code.
* Allowing sub-classing of builtins makes everything slow.
* BigInt as designed is almost always slower than the engine's inferred 31-bit integers.
Meanwhile, Google and Mozilla refuse to implement proper tail calls even though they would INCREASE performance for a lot of code. They killed their SIMD projects (despite having them already implemented) which also reduced performance for the most performance-sensitive applications.
It seems obvious that performance is a non-issue when it's something they want to add and an easy excuse when it's something they don't want to add.
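Two of the costs listed above are easy to observe directly; a small JavaScript sketch:

```javascript
// Spread quietly allocates a brand-new array every time it runs.
const a = [1, 2, 3];
const b = [...a];
const separateAllocation = a !== b; // true: same contents, distinct array

// Each iterator step returns a fresh { value, done } result object,
// so iterating generates garbage proportional to the number of steps.
const it = [1, 2][Symbol.iterator]();
const r1 = it.next();
const r2 = it.next();
const freshResultObjects = r1 !== r2; // true: two steps, two allocations
```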
tracker1•6mo ago
hajile•6mo ago
Record/tuple was killed off despite being the best proposed answer for eliminating hidden class mutation, providing deep O(1) comparisons, and making webworkers/threads/actors worth using because data transfer wouldn't be a bottleneck.
Pattern matching, do expressions, for/while/if/else expressions, binary AST, and others have languished for years without the spec committee seemingly caring that these would have real, tangible upsides for devs and/or users without adding much complexity to the JIT.
I'm convinced that most of the committee is completely divorced from the people who actually use JS day-to-day.
jedwards1211•6mo ago
int_19h•6mo ago
Only in function calls, surely? If you're using spread inside [] or {} then you already know that it allocates.
hajile•6mo ago
This applies to MOST devs today in my experience and doubly to JS and Python devs as a whole largely due to a lack of education. I'm fine with devs who never went to college, but it becomes an issue when they never bothered to study on their own either.
I've worked with a lot of JS devs who have absolutely no understanding of how the system works. Allocation and garbage collection are pure magic. They also have no understanding of pointers or the difference between the stack and heap. All they know is that it's the magic that makes their code run. For these kinds of devs, spread just makes the object they want and they don't understand that it has a performance impact.
Even among knowledgeable devs, you often get the argument that "it's fast enough" and maybe something about optimizing down the road if we need it. The result is a kind of "slow by a thousand small allocations" where your whole application drags more than it should and there's no obvious hot spot because the whole thing is one giant, unoptimized ball of code.
At the end of the day, ease of use, developer ignorance, and deadline pressure means performance is almost always the dead-last priority.
nobleach•6mo ago
Most of the more interesting proposals tend to languish these days. When you look at everything that's advanced to Stage 3-4, it's like, "ok, I'm certain this has some amazing perf bump for some feature I don't even use... but do I really care?"
xixixao•6mo ago
Another angle is “how much rewriting does a change require”, in this case, what if I want to add another argument to the rhs function call. (I obv. don’t consider currying and point-free style a good solution)
WorldMaker•6mo ago
It may be useful data that the TC39 proposal champions can use to fight for the F# style.
fergie•6mo ago
defraudbah•6mo ago
both are going somewhere and super popular though
77pt77•6mo ago
epolanski•6mo ago
Go explain to them that promises already have a natural way to chain operations through the "then" method, and don't need the pipe operator bolted on to do more than needed.