I think IPC via HTTP, gRPC, Kafka, files, etc. allows language decoupling pretty well. Intra-process communication is primarily single-language, though you can generally call from language X into C libraries. Cross-process, I don't see where the assertion comes from.
It will certainly do that if the buffer is full.
prevents the implicit blocking
No, that's exactly the case of implicit blocking mentioned above.
Does anyone else find this article rather AI-ish? The extreme verbosity and repetitiveness, the use of dashes, and "The limitation isn't conceptual—it's syntactic" are notable artifacts.
prog1 -input input_file -output tmp1_file
prog2 -input tmp1_file -output tmp2_file && del tmp1_file
prog3 -input tmp2_file -output tmp1_file && del tmp2_file
...
progN -input tmpX_file -output output_file && del tmpX_file
is more in line with the author's claimed benefits of pipes than the piped style itself. The process isolation is absolute: the stages are separated not just in space but in time as well, entirely! You can consider that an OS/resource-specific limitation rather than a limitation of the concept.
After reading the whole thing, yes! Specifically, it feels incoherent in the way AI text often is. It starts by praising Unix pipes for their simple design and the explicit tradeoffs they make, and then proceeds to explain how we could and should make the complete opposite set of tradeoffs.
Also viewing Unix pipes as some special class of file descriptor because your Intro to OS professor didn't teach you anything more sophisticated than shell pipe syntax is kinda dumb.
File descriptor-based IPC has none of the restrictions discussed in this article. They're not restricted to text (and the author does point this out), they're not restricted to linear topologies, they work perfectly fine in parallel environments (I have no idea what this section is talking about), and in Unix-land processes and threads are identically "heavy" (Windows is different).
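For instance (a small sketch of my own, assuming bash, not something from the article): process substitution wires two producer processes into one consumer, which is already a non-linear topology built from plain file descriptors.

```shell
# diff reads from two process outputs at once via bash process
# substitution -- a fan-in topology, not a linear pipeline.
diff <(printf 'a\nb\n') <(printf 'a\nc\n')
```

Each `<(...)` shows up to `diff` as an ordinary readable file descriptor; nothing here is special to shell pipes.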
For instance sqrt(sin(cos(theta))) can be notated < theta | cos | sin | sqrt.
Pipeline syntax implemented in functional languages expands into chained function invocation.
Everything follows from that: what we know about combining functions applies to pipes.
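To make the correspondence concrete (a sketch of my own, using standard utilities): a pipeline is nested application read left to right, so `sort | head | tr` computes `tr(head(sort(input)))`.

```shell
# Pipeline as function composition: each stage transforms the stream,
# equivalent to tr(head(sort(lines))).
printf 'banana\napple\ncherry\n' | sort | head -n 2 | tr 'a-z' 'A-Z'
# prints:
# APPLE
# BANANA
```

Reassociating or reusing stages then works exactly like it does for composed functions.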
> When cat writes to stdout, it doesn't block waiting for grep to process that data.
That says nothing more than that nested function invocations admit non-strict evaluation strategies: the argument of a function need not be reduced to a value before it is passed to another function, which can proceed with a computation that depends on that result before actually obtaining it.
When you expand the actual data dependencies into a tree, it's easy to see what can be done in parallel.
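The non-strictness is easy to demonstrate (a sketch of my own): `yes` produces an unbounded stream, yet the pipeline terminates because the consumer never demands the whole argument.

```shell
# `yes` would run forever, but the pipe evaluates non-strictly:
# head consumes three lines and exits, yes is killed by SIGPIPE,
# and the "infinite" stream is never fully produced.
yes hello | head -n 3
# prints:
# hello
# hello
# hello
```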
Fanout has precisely zero dependency on GC. For example, ‘tee’ has been around for decades and it can copy I/O streams just fine.
There has been some effort to build fanout shells too; one called dgsh was discussed on HN earlier this month: https://news.ycombinator.com/item?id=45425298
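A minimal fanout sketch with plain `tee` (the temp-file path here is just for the example): the stream is duplicated to a file while continuing downstream, no runtime support needed.

```shell
# Fanout without any GC: tee writes a copy of the stream to a file
# while passing it through to the next stage unchanged.
copy=$(mktemp)
printf '1\n2\n3\n' | tee "$copy" | wc -l   # downstream sees 3 lines
cat "$copy"                                # the copy holds the same 3 lines
rm -f "$copy"
```

With bash process substitution, `tee >(cmd)` extends the same trick to fan out into other processes rather than files.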
Edit: I agree with other comments that this feels like AI slop
rajiv_abraham•1mo ago