I think the blog post does a good job describing the idea of Selective (the "finite-case" framing, etc.), but for me it falls apart shortly afterwards. If I were writing it, based on what I understood, I would start with the overview, then describe `CaseTree`, and then go into which abstractions it is an instance of.
As a small example of how I think the writing could be improved, take this sentence:
"This is in contrast to applicative functors, which have no “arrow of time”: their structure can be dualized to run effects in reverse because it has no control flow required by the interface."
This uses jargon where it's not necessary. There is no need to mention duality, and the "arrow of time" isn't very helpful unless you've had some fairly specific education. I feel it's sufficient to say that applicatives don't represent any particular control-flow and therefore can be run in any order.
I believe there's a minor error in the "Tensorful" section. When describing that `CaseTree` is a profunctor, the type of the contravariant map over `CaseTree` is written as `(i' -> i) -> CaseTree f i r -> CaseTree i' f r`. I believe the last term should be `CaseTree f i' r`.
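For reference, here's the corresponding shape from the profunctors package, which is what makes me think the correction above is right (a sketch for comparison, not something from the post itself):

```haskell
import Data.Profunctor (Profunctor, lmap)

-- The contravariant half of Data.Profunctor. Instantiating p to
-- CaseTree f, as the post does, this specialises to
--   (i' -> i) -> CaseTree f i r -> CaseTree f i' r
-- i.e. the 'f' stays where it is and only the input index changes.
contramapInput :: Profunctor p => (i' -> i) -> p i r -> p i' r
contramapInput = lmap
```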
In Haskell, there's a lot of desire to be able to write effectful code as you normally would, but with different types, to do things like restrict the available actions (algebraic effects) or enable optimizations like batching. The approach generally used for this (free monads) works by producing a data structure that's kind of like an AST: Haskell's "do" notation transforms the sequential code into monadic "bind" calls for your AST's type (like turning .then() into .flatMap() calls, if you're coming from JavaScript), and then the AST can be manipulated before being interpreted/executed. This works, but it's fundamentally limited by the fact that the "bind" operation takes a callback to decide what to do next. A callback is arbitrary code: your "bind" implementation can't look into it to see what it might do, so there's no room to "look ahead" and do runtime optimization.
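To make that concrete, here's a minimal free monad, roughly the encoding the free package uses in Control.Monad.Free; the point is that the continuation passed to bind is an opaque function:

```haskell
-- A minimal free monad: an AST of effects 'f', interleaved with pure values.
data Free f a
  = Pure a
  | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  -- 'k' is an arbitrary Haskell function. An interpreter walking this
  -- structure can only discover what comes after a bind by applying 'k'
  -- to an actual value -- hence no look-ahead past this point.
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)
```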
Another approach is to slide back to something less powerful than monads: applicative functors, where the structure of the computation is known in advance. But the whole point of using monads is that they can decide what to do next based on the runtime result of the previous operation - that they accept a callback - so by switching to applicatives you are, by definition, giving up the ability to make runtime choices, like deciding not to run a query if the last one got no results.
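A small sketch of that trade-off (the functor/monad is left abstract and the query shapes are made up for illustration):

```haskell
-- With Applicative, both effects exist before anything runs, so an
-- interpreter can see them both -- but nothing can depend on q1's result.
bothQueries :: Applicative f => f [a] -> f [b] -> f ([a], [b])
bothQueries q1 q2 = (,) <$> q1 <*> q2

-- With Monad, we can skip q2 when q1 comes back empty -- but the decision
-- lives inside an opaque callback, so nothing can be known about q2 until
-- q1 has actually run.
skipIfEmpty :: Monad m => m [a] -> m [b] -> m ([a], [b])
skipIfEmpty q1 q2 = do
  rs <- q1
  if null rs
    then pure (rs, [])
    else (,) rs <$> q2
```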
Selective Functors were introduced as a middle ground - solidifying the possible decisions ahead of time, while still allowing decisions based on runtime information - for example, choosing from a set of pre-defined SQL queries, rather than just running a function that generates an arbitrary one.
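Concretely, the interface from the "Selective Applicative Functors" paper (shipped in the selective package as Control.Selective) adds a single method; it's restated here so the sketch is self-contained:

```haskell
import Data.Bool (bool)

class Applicative f => Selective f where
  -- If the first effect yields 'Left a', the second effect is applied to it;
  -- if it yields 'Right b', the second effect may be skipped entirely.
  select :: f (Either a b) -> f (a -> b) -> f b

-- Branching on top of 'select': both branches are declared up front
-- (so they can be inspected statically), but only one is needed at runtime.
branch :: Selective f => f (Either a b) -> f (a -> c) -> f (b -> c) -> f c
branch x l r = fmap (fmap Left) x `select` fmap (fmap Right) l `select` r

ifS :: Selective f => f Bool -> f a -> f a -> f a
ifS c t e = branch (bool (Right ()) (Left ()) <$> c) (const <$> t) (const <$> e)
```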
munchler•1mo ago
The basic idea, as I understand it, is a typeclass that is more powerful than an applicative, but still less powerful than a monad, called a "selective applicative". I would summarize them like this:
* Applicative: Fixed computation graph, but no conditional structure.
* Selective applicative: Fixed computation graph with some conditional "branches".
* Monad: Dynamic computation graph and control flow, can generate new structure on the fly.
I'm sure I'm still missing a lot, but I think that's the 10,000 foot view.
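For what it's worth, the single operation each level adds looks like this side by side (assuming the selective package for the middle one):

```haskell
import Control.Selective (Selective, select)

applyA :: Applicative f => f (a -> b) -> f a -> f b
applyA = (<*>)     -- structure fully fixed up front, no branches

choose :: Selective f => f (Either a b) -> f (a -> b) -> f b
choose = select    -- branches fixed up front, picked by a runtime value

bind :: Monad m => m a -> (a -> m b) -> m b
bind = (>>=)       -- the next step is an arbitrary function, built on the fly
```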
Bjartr•1mo ago
It started making more sense, though, when I managed to fully understand the AST comparison that was being made. Specifically, this approach lets you do the Lispy "code is data" thing, where you construct your program within your program and then run it, but via "data is execution + control flow" instead. You thus gain the benefits of static analysis on the constructed program, since you wrote it all out in the "normal"/static order rather than the "nested"/dynamic view of a program that monads give you.
At least that's the gist I got, though take it with a grain of salt, the article went very over my head at times.
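Here's roughly what I understand the static-analysis benefit to look like, assuming the selective package; the `Labels` functor and the query names are made up for the example:

```haskell
import Control.Selective (Selective (..), ifS)

-- A functor that only records effect labels and never produces real values.
-- (The selective package ships an equivalent under the name Over.)
newtype Labels a = Labels [String]

instance Functor Labels where
  fmap _ (Labels xs) = Labels xs

instance Applicative Labels where
  pure _                  = Labels []
  Labels xs <*> Labels ys = Labels (xs ++ ys)

instance Selective Labels where
  -- Over-approximate: we can't know at analysis time which branch will be
  -- taken, so report the effects of both operands.
  select (Labels xs) (Labels ys) = Labels (xs ++ ys)

flagQ :: Labels Bool
flagQ = Labels ["feature_flag"]

newTableQ, oldTableQ :: Labels [Int]   -- pretend these return rows
newTableQ = Labels ["new_table"]
oldTableQ = Labels ["old_table"]

-- Every query the program *might* run, found without running anything.
-- With a monad, the branches would hide inside a callback and be invisible.
possibleQueries :: [String]
possibleQueries =
  case ifS flagQ newTableQ oldTableQ of
    Labels xs -> xs   -- ["feature_flag", "new_table", "old_table"]
```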
discarded1023•1mo ago
throwaway17_17•1mo ago
In the article, the author is pointing out that the selective applicative doesn't seem to work correctly (in a categorical sense) for functions, but when generalizing to profunctors, a near-semiring structure appears and works for the SApplicative.
I am pretty sure I’m reading TFA correctly here, but I’ll check when off mobile and edit if I still can.
internet_points•1mo ago