There are several things in the article I disagree with regarding Haskell (like the notion of a "monad's internal state"), but that's understandable given that this is OP's first time using the language. I want to highlight one particular observation, though:
> This uncertainty of time of evaluation also makes catching errors difficult, because calling catch on the function that throws the error will not necessarily catch that error
It's important to distinguish between imprecise exceptions (e.g. calls to `error`, `undefined`, and the like) and synchronous exceptions (async exceptions aren't relevant to the article).
> Catch must be called on the function that forces evaluation on that error. This is something that is hard to trace, and something that types don’t help much with.
The types are actually the most important part here! Synchronous exceptions cannot be thrown by pure code (as long as we're not dealing with escape hatches like `unsafePerformIO`), while IO code can throw and catch all kinds of exceptions.
https://hackage.haskell.org/package/bluefin-0.0.16.0/docs/Bl...
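The distinction can be sketched with `evaluate` from `Control.Exception` (a minimal, hypothetical example; `bad` and `forceOrZero` are invented names): a `catch` around code that merely *returns* a thunk never fires, because the thunk escapes unevaluated; forcing the value inside IO makes the handler actually see the exception.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, catch, evaluate)

-- A pure value that throws an imprecise exception when forced.
bad :: Int
bad = error "boom"

-- Forces a pure value inside IO, turning "whenever this thunk is
-- demanded" into "right here", where a handler can see it.
forceOrZero :: Int -> IO Int
forceOrZero v = evaluate v `catch` \(_ :: SomeException) -> return 0

main :: IO ()
main = do
  -- The thunk escapes: `return bad` forces nothing, so this `catch`
  -- never fires, and `x` would blow up later, wherever it's demanded.
  x <- return bad `catch` \(_ :: SomeException) -> return (0 :: Int)
  -- `evaluate` forces the value here, so the handler actually runs:
  y <- forceOrZero bad
  print y  -- prints 0
```

This is exactly why "call catch on the function that forces evaluation" is the rule: the handler has to be installed around the point of *forcing*, not the point of *construction*.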
When the dude uses `foldl` over lists and `foldr` with `(*)` (numeric product) it is not the language that's the lost cause.
What's the preferred approach?
`foldr` I almost never use, but when I do, it's because the return value is a lazy list and I will only need to evaluate a prefix.
https://h2.jaguarpaw.co.uk/posts/foldl-traverses-state-foldr...
In short: use foldl’ when you’re iterating through a list with state. foldr can be used for anything else, but I recommend it for experts only. For non-experts I expect it’s easier to use for_. For more details about that see my article “Scrap your iteration combinators”.
https://h2.jaguarpaw.co.uk/posts/scrap-your-iteration-combin...
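Both cases from that advice can be sketched side by side (helper names are made up for illustration): `foldl'` for strict state accumulated over the whole list, `foldr` when the result is itself a lazy list and only a prefix is demanded.

```haskell
import Data.List (foldl')

-- Strict left fold: iterating through a list with state.
-- Runs in constant space, with no thunk build-up.
sumSquares :: [Int] -> Int
sumSquares = foldl' (\acc x -> acc + x * x) 0

-- Lazy right fold: the result is a lazy list and foldr can stop
-- early, which foldl' cannot do. This even works on infinite input.
takeWhileSmall :: [Int] -> [Int]
takeWhileSmall = foldr (\x rest -> if x < 10 then x : rest else []) []

main :: IO ()
main = do
  print (sumSquares [1..1000])    -- 333833500
  print (takeWhileSmall [1..])    -- [1,2,3,4,5,6,7,8,9], terminates
```

Plain `foldl` (without the tick) is the one with essentially no good use case on lists: it is lazy in the accumulator, so it builds a chain of thunks without being able to stop early either.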
This is a great example of Haskell's community being toxic. The author clearly mentioned they're new to the language, so calling them a "lost cause" for making a beginner mistake is elitist snobbery.
I usually don't point these things out and just move on with my life, but I went to a Haskell conference last year and was surprised that many Haskell proponents are not aware of the effects of this attitude towards newcomers.
I wonder what you call the practice of complete beginners spreading FUD and suggesting to their readers that something in the language is "a lost cause", all whilst having neither enough knowledge nor sufficient practice to make assumptions of this kind.
> This is a great example of Haskell's community being toxic
To be clear: I don't represent the Haskell community, I'm not part of it, and I couldn't care less about it. It just so happened that I saw the author inflating their credentials at the language's expense by spreading FUD, which the beginners you seem to care about are susceptible to, and I didn't like it.
If you get triggered by dissatisfaction with the author's unsubstantiated presumptuousness, reflected back at them in the same style and manner they allowed themselves when talking about a thing they don't know, then that's purely on you and your infantilism.
(Still, hopefully in this case it's clear from instig007's reply that it's not a member of the Haskell community behaving in that way.)
Newcomer or not, it is natural for people online to criticize wrong opinions.
https://www.jsoftware.com/ioj/iojATW.htm
https://github.com/kparc/ksimple/blob/main/ref/a.c
Slightly less facetiously, the ksimple repository has a version with comments.
https://github.com/kparc/ksimple
https://github.com/kparc/ksimple/blob/main/a.c
Note, these aren't APL, but they are in the same family of array languages.
APL isn't really one of these exhibitions of computational simplicity in the way of the languages you mention. Its inventor, Kenneth Iverson, was more focused on the human side of thinking in and using the language.
Forth, Lisp, et al are quite easy to implement, but they require considerable library layers on top to make them useful for expressing application-level logic, even if we just focus on the pure functions. APL, on the other hand, has a larger core set of primitives, but you're then immediately able to concisely express high-level application logic.
Are you looking for a kind of reference implementation for learning purposes? If so, I'd say the best route is just go with the docs. Arguably the Co-dfns compiler is a precise spec, but it's notably alien to non-practitioners.
For non-event driven systems, the short story is to organize application state as a global database of inverted tables and progressively normalize them such that short APL expressions carry the domain semantics you want.
For event driven systems, we have token enumeration over state machines, which can be expressed as literal Branches to state blocks.
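The "token enumeration over state machines" idea can be roughly rendered outside APL. Here is a hypothetical Haskell sketch (all names invented): the APL-style branch-to-state-block becomes a transition-table lookup, with events enumerated as tokens and the machine driven by a fold.

```haskell
import Data.List (foldl')

-- Invented example states and event tokens.
data State = Idle | Running | Done deriving (Eq, Show)
data Token = Start | Tick | Stop deriving (Eq, Show, Enum, Bounded)

-- The transition table: each (state, token) pair names the next
-- state, mirroring APL's branch-to-state-block style.
step :: State -> Token -> State
step Idle    Start = Running
step Running Tick  = Running
step Running Stop  = Done
step s       _     = s  -- tokens that don't apply leave the state alone

-- Driving the machine is just a strict fold over the event stream.
run :: [Token] -> State
run = foldl' step Idle
```

A stream like `run [Start, Tick, Tick, Stop]` walks the table to `Done`; unrecognized tokens are absorbed without changing state.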
Granted, the above likely doesn't communicate well unless you're already primed with all the necessary ideas. If you're interested, I'm willing to chat. Email is in my profile description.
source: Current day-to-day is greenfield APL dev.
It is the one that always trips me up. I just started my third attempt at learning APL/array languages; both previous times I got to the same point, where I was mostly at a loss as to how to apply it all. So I moved on and forgot all that I learned until the next time, and had to start over. Thankfully the keyboard layout from the last attempt has mostly stuck, which makes progress much quicker.
I may take you up on that email offer once I get a bit further along in this attempt; I can't quite form reasonable questions yet and only know what tripped me up previously. I believe you are a regular in the APL Orchard? I will be joining it soon, so perhaps I'll see you there.
On the off chance you've not seen this: https://aplwiki.com/wiki/Chat_rooms_and_forums
>organize application state as a global database of inverted tables and progressively normalize them such that short APL expressions carry the domain semantics you want.
I think most sizable stuff is proprietary. I implemented an LSP in an open-source K, which uses JSON-RPC. But the open-source K is probably best considered a hobby project.
https://github.com/gitonthescene/ngnk-lsp/blob/kpath/k/lsp.k
You might consider joining one of the APL forums if you haven’t already.
But first, "database" here is just the list of global variables. If that wasn't obvious, it is important.
My application processes weblogs, and "clicks" is the bit-array recording which events were clicks (redirects), as opposed to having a variable called "weblog" which contains a list of records, each possibly having an event-type field.
Normalizing them acknowledges that the weblog (input) probably looked like the latter, but it's easier to do work on the former. APL makes that transformation very easy: just rotate the table. In k this is flip. Simples.
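The same move can be sketched outside APL (field names and data here are invented for illustration): the record-per-row form versus the inverted, column-per-field form is literally a transpose, after which per-field questions become whole-array operations.

```haskell
import Data.List (transpose)

-- Row-oriented input, as a weblog might arrive: one record per event.
-- Hypothetical fields: [timestamp, eventType, userId]
weblog :: [[String]]
weblog =
  [ ["t1", "click", "u1"]
  , ["t2", "view",  "u2"]
  , ["t3", "click", "u1"]
  ]

-- "Inverting" the table is just a transpose (k's flip): each field
-- becomes a whole column.
inverted :: [[String]]
inverted = transpose weblog

-- The "clicks" bit-array from the comment above: which events were
-- clicks, computed against the eventType column in one pass.
clicks :: [Bool]
clicks = map (== "click") (inverted !! 1)
```

From here, `clicks` is just another global column that any later expression can combine with the others.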
Those "domain semantics" are simply the thing I want to do with "clicks" which in my application comes from the business (users), so I want them to provide a bunch of rules to do that.
Now with that in mind, take a look here:
https://code.jsoftware.com/wiki/Vocabulary/bdot#bitwise
and here:
http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec...
Look carefully at the tables: they're given in slightly different ways, but they are the same. And these are all of them, so it should be obvious at this point that you can represent any boolean operation against any matrix of variables with a matrix of these numbers.
For example, you might have a sql-like expression (from a user of your application) of x=y, and x<y and so on that you want to use to filter some data set.
Now, if you think about how you might do this in JavaScript (for example; probably in Scheme too), you would probably organise such a table as a chain of closures. And I think if you look at any ORM in just about any language you'll see this kind of pattern (maybe they use classes or a tree, but these are obviously the same as closures). But such a tree can only be traversed, and a closure can only be called. Maybe you can batch or shard, but that's it, and since I would bet there is a lot of dependent loading/branching in this tree of closures, it is slow.
But if you understand that this tree is also a matrix of boolean operators, it is obviously parallelisable. And each operation is simple/cache-friendly and therefore fast. This leads to your "queries" being a set of projected indexes or bitmaps (whichever is convenient), which you probably also store in global variables someplace (because that is convenient) while you do what you need to do (make XML, JSON, bar charts, run programs, whatever).
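A rough sketch of that bitmap style (Haskell lists standing in for bit-vectors; the columns and predicates are invented): each predicate over a column produces one mask, masks combine with elementwise boolean arithmetic over whole columns rather than one closure call per row, and the result projects out surviving row indices.

```haskell
-- Two invented columns of data.
xs, ys :: [Int]
xs = [1, 5, 3, 7]
ys = [2, 5, 9, 6]

-- Each user-supplied predicate becomes a whole-column bitmap.
eqMask, ltMask :: [Bool]
eqMask = zipWith (==) xs ys   -- "x = y"
ltMask = zipWith (<)  xs ys   -- "x < y"

-- "x = y OR x < y" is one bulk elementwise pass, not a closure chain.
keep :: [Bool]
keep = zipWith (||) eqMask ltMask

-- Project the surviving row indices, ready to index any column with.
hits :: [Int]
hits = [i | (i, True) <- zip [0 ..] keep]
```

Here rows 0, 1 and 2 survive and row 3 (where 7 is neither equal to nor less than 6) drops out; in an array language the masks would be actual bit-vectors and the combining step a single primitive.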
The expansion is mechanical and thus not really an attempt at readability.
As such there's really no pretty "core" that pulls its weight to implement in 1000LoC and is useful for much.
Here's a simple minimal APL parser in JS that I wrote once to display one way of parsing APL: https://gist.github.com/dzaima/5130955a1c2065aa1a94a4707b309...
Couple that with an implementation of whatever primitives you want, and a simple AST walker, and you've got a simple small APL interpreter. But those primitive implementations already take a good chunk of code, and adding variables/functions/nested functions/scoping/array formatting/etc adds more and more bits of independent code.
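To make the "primitives + small walker" point concrete, here is a toy, entirely invented sketch (not any real implementation): a right-to-left evaluator over flat integer vectors with just two primitives and APL-style scalar extension. Even this tiny core shows where the code goes: the walker is trivial, the primitives are where the bulk accumulates.

```haskell
-- A minimal AST for APL-flavoured expressions over integer vectors.
data Expr
  = Lit [Int]
  | Monadic Char Expr        -- e.g. 'i' standing in for iota
  | Dyadic  Char Expr Expr   -- e.g. '+' with scalar extension

-- The "simple AST walker": a few lines per primitive.
eval :: Expr -> [Int]
eval (Lit v)         = v
eval (Monadic 'i' e) = case eval e of
  [n] -> [1 .. n]            -- iota: the vector 1..n
  v   -> v
eval (Dyadic '+' a b) = zipExt (+) (eval a) (eval b)
eval (Dyadic '*' a b) = zipExt (*) (eval a) (eval b)
eval _                = []   -- unknown primitives: empty result

-- APL-style scalar extension: a scalar pairs with every element.
zipExt :: (Int -> Int -> Int) -> [Int] -> [Int] -> [Int]
zipExt f [x] ys = map (f x) ys
zipExt f xs [y] = map (`f` y) xs
zipExt f xs ys  = zipWith f xs ys
```

So `10 + ι3` becomes `Dyadic '+' (Lit [10]) (Monadic 'i' (Lit [3]))` and evaluates to `[11,12,13]`; everything beyond this (rank, nesting, formatting, scoping) is the "more and more bits of independent code" mentioned above.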
Perhaps, if you accept defining bits of the language in itself via a bootstrap step, BQN is a good candidate for existing small implementations: a BQN VM + minimal primitive set is ~500 LoC of JS[0] (the second half of the file is what you could call the native components of a stdlib), and the first public commit of a C impl[1] was 2K LoC; both of those have the rest of the primitives self-hosted[2], and the compiler (source text → bytecode) is self-hosted too[3]. (The C impl also used r0.bqn for even fewer required native primitives, but modern CBQN uses very little of even r1.bqn, having most important things native and heavily optimized.)
[0]: https://github.com/mlochbaum/BQN/blob/master/docs/bqn.js though earlier revisions might be more readable
[1]: https://github.com/dzaima/CBQN/tree/bad822447f703a584fe7338d...
[2]: https://github.com/mlochbaum/BQN/blob/master/src/r1.bqn (note that while this has syntax that looks like assigning to primitives, that's not actual BQN syntax and is transpiled away)
On a semi-related topic: I tried learning Haskell this past weekend out of curiosity; I last tried it some 10+ years ago while still in college.
I found resources for it scant. Coming from more modern languages/tooling like Go/Rust, I also struggled quite a bit with installation and the build/package system.
I tried the stack template generator for yesod/sqlite, and after some 15 minutes of it installing yet another GHC version and building, I eventually Ctrl+C'd and closed the window.
Maybe this was a unique experience, but I'd love some guidance on how to be successful with Haskell. I've primarily spent most of my professional years building web services, so that was the first place I went to. However, I was taken aback by how seemingly awful the setup and devex was for me. I've always been interested in functional programming, and was looking to sink my teeth in to a language where there is no other option.
Stack builds on top of Cabal and used to solve a bunch of problems, but the reasons for its existence are no longer super relevant. It still works totally fine if that's your thing, though.
As for being successful, there are several nice books, and several active forums. I've gotten good answers on the Libera IRC network #haskell channel, and on the Haskell matrix channel #haskell:matrix.org
If you want to get started without installing anything, there's the exercism track: https://exercism.org/tracks/haskell
I've heard good things about Brent Yorgey's Haskell course ( https://www.cis.upenn.edu/~cis1940/spring13/lectures.html ) but haven't tried it myself.
Do you think you would have benefitted from a resource like the Rust book? I've been toying with the idea of writing something similar and donating it to the Haskell Foundation
This AoC solution is, indeed, quite the functionista! Better yet, it leans heavily into point-free expressions. The style is pretty popular amongst APL language enthusiasts and puzzlers.
That said, you actually see quite different APL styles in the wild:
- Pointed declarative style, also popular with functional programmers (e.g. anything like this[0] from the dfns workspace)
- Imperative, structured programming, very common in legacy production systems (e.g. this[1] OpenAI API interface)
- Object-oriented, also common in somewhat newer production environments (e.g. the HTTP interface[2])
- Data-parallel style (e.g. Co-dfns[3])
Heck, APL even has lexical and dynamic scope coexisting together. IMHO, it's truly underrated as a language innovator.
[0]:https://dfns.dyalog.com/c_match.htm
[1]:https://github.com/Dyalog/OpenAI/blob/main/source/OpenAI.apl...
[2]:https://github.com/Dyalog/HttpCommand/blob/master/source/Htt...
[3]:https://github.com/Co-dfns/Co-dfns/blob/master/cmp/PS.apl
I think the first Raku (Perl 6) parser, Pugs, was written in Haskell; certainly the whole team learned Haskell before they started.