fizzle :: Int -> String
fizzle n
  | fizz && baz  = "Fizz Baz!" -- if this branch is taken, buzz is never evaluated
  | fizz && buzz = "Fizz Buzz!"
  | fizz         = "Fizz!"
  | buzz         = "Buzz!"
  | otherwise    = show n
  where
    fizz = n `mod` 3 == 0
    buzz = n `mod` 5 == 0
    baz  = n `mod` 7 == 0
The meat of the code was a one-liner. But opening a file, throwing away the first line, and parsing a CSV took a lot more head-scratching, especially if you've only ever done puzzles in Haskell.
Makes the hard things easy and the easy things hard
line = decimal `sepBy` char ','
file = line `sepBy` newline
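For anyone who wants to run those two lines, here is a self-contained sketch. It assumes megaparsec (the comment doesn't say which combinator library was used), so treat the imports and the Parser alias as guesses:

import Data.Void (Void)
import Text.Megaparsec (Parsec, parse, sepBy, errorBundlePretty)
import Text.Megaparsec.Char (char, newline)
import qualified Text.Megaparsec.Char.Lexer as L

type Parser = Parsec Void String

-- A CSV line of integers, then a file of such lines.
line :: Parser [Int]
line = L.decimal `sepBy` char ','

file :: Parser [[Int]]
file = line `sepBy` newline

main :: IO ()
main = case parse file "" "1,2,3\n4,5,6" of
  Left err   -> putStr (errorBundlePretty err)
  Right rows -> print rows  -- [[1,2,3],[4,5,6]]

Everything outside the two grammar definitions is plumbing; the grammar itself is still just those two lines.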
That's it. It almost reads like prose.

It's fun because the interviewer often doesn't grok the paradigm. In one case, I was asked to make a password generator, and I went with QuickCheck's Gen, which lets me write a random generator of things without explicitly defining a single function (they're all monads).
So you go from somehow describing the thing you want to make to “I’m done.” :-D
I also refused to solve a part of the task because it would lead to bad password policy. Instead I property-tested the generator.
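Not their actual code, but a sketch of the approach: Gen composes declaratively, and the same generator feeds the properties. The length and character policy here are invented for illustration:

import Test.QuickCheck

-- A password generator assembled from Gen combinators.
passwordGen :: Gen String
passwordGen = do
  n <- choose (12, 20)  -- hypothetical length policy
  vectorOf n (elements (['a' .. 'z'] ++ ['A' .. 'Z'] ++ ['0' .. '9'] ++ "!@#$%"))

-- Property-test the generator rather than eyeballing its output.
prop_longEnough :: Property
prop_longEnough = forAll passwordGen (\pw -> length pw >= 12)

main :: IO ()
main = quickCheck prop_longEnough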
What is the order of sumNToTotal?
This is just a beginner's Haskell post. If an interviewer really meant to test optimization knowledge, I would hope they wouldn't ask me to optimize one O(n) utility function into another.
> While it's true that sort has an O(n log n) performance profile, one interesting thing here is that sorting is lazy in Haskell! This means that if our two strings are unequal, they will only be sorted far enough to determine inequality!
I don't understand exactly how this works - don't all O(n log n) sorting algorithms still have to do some work on the other elements of the list before pulling out the smallest one? If you just scanned and pulled the smallest one to the front each time, that would be O(n^2). And there's an implementation detail here: differences at the start of the sorted string will be caught more quickly than differences at the end, so real-world behaviour depends on your choice of sort order.
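The trick is that a lazy merge sort can produce its smallest element without finishing the sort. A simplified sketch in the spirit of Data.List.sort (the real implementation merges ascending runs, but the laziness argument is the same):

msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort left) (msort right)
  where
    (left, right) = splitAt (length xs `div` 2) xs
    -- Each cons cell is produced on demand: forcing just the head of the
    -- result forces one comparison per merge node, not the whole sort.
    merge [] ys = ys
    merge xs' [] = xs'
    merge (x:xs') (y:ys)
      | x <= y    = x : merge xs' (y:ys)
      | otherwise = y : merge (x:xs') ys

-- take k (msort xs) does only O(log n) further comparisons per element,
-- and (==) on lists is also lazy, so comparing two lazily sorted strings
-- can stop at the first position where they differ.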
Agreed that the performance analysis is quite superficial and that some of the solutions, like sumNToTotal, must be very sub-optimal.
Thus the whole thing is O(n²). It's not like this is particularly hard for Haskell. If anything, it's often easier because we can set up the recurrence by analogy with the definition.
(the final filter sums k elements nCr(n, k) times as well)
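To illustrate "setting up the recurrence by analogy with the definition" -- a hypothetical sketch, where the name and signature are guesses rather than the article's code, and non-negative inputs are assumed -- either the first element is part of the solution or it isn't, which avoids summing all nCr(n, k) subsets after the fact:

-- Choose k elements of the list that sum to total.
sumNToTotal :: Int -> Int -> [Int] -> [[Int]]
sumNToTotal 0 0 _  = [[]]           -- nothing left to pick: one empty solution
sumNToTotal _ _ [] = []             -- ran out of candidates
sumNToTotal k total (x:xs)
  | k <= 0 || total < 0 = []        -- prune dead branches early
  | otherwise =
      map (x:) (sumNToTotal (k - 1) (total - x) xs)  -- take x
        ++ sumNToTotal k total xs                    -- skip x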
That said: if I ever gave the palindrome question to a candidate (I haven't) and they pulled out a `reverse` library func/keyword, we'd be encouraging them to do more.
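Presumably the `reverse` solution in question is the usual one-liner:

isPalindrome :: Eq a => [a] -> Bool
isPalindrome xs = xs == reverse xs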
Well, it's not clear to me why we're terming these as silly. They mostly seem simple, instead, to me.
StopDisinfo910•5h ago
The article also features examples of point-free style, another unfortunate trend for readability.
As long as you use operators sparingly, don’t abuse partial application and prefer explicit lambdas to composition, Haskell is fairly readable. The issue is that approximately no Haskeller writes Haskell this way.
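To illustrate the contrast (my own example, not one from the thread):

-- Point-free composition: terse, but the data flow is implicit.
countLongWords :: String -> Int
countLongWords = length . filter ((> 4) . length) . words

-- The same function with explicit lambdas and arguments.
countLongWords' :: String -> Int
countLongWords' s = length (filter (\w -> length w > 4) (words s))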
Skeime•2h ago
(For example, (\x -> x ++ y) and (\y -> x ++ y) look pretty similar to me at first glance, but (++y) and (x++) are immediately distinguishable.)
Of course, this relies on knowing the operators, but that seems like a mostly orthogonal issue to me: you still need to know the operator in the lambda expression. That said, the niceness of sections gives people yet another reason to introduce operators for their stuff when arguably they already are too prevalent.
kqr•5h ago
The colon (:) is Haskell's list constructor. It is also an operator, meaning it can be used with infix notation, as in (x : xs). Haskell has something called operator sections, where if one supplies only one of the arguments to an operator it will return a function expecting the other argument. In other words, (x:) is shorthand for \xs -> x : xs, and (:xs) is shorthand for \x -> x : xs.

This can be used, as in this article, to create a function that prepends x to any list. Another common example is (1+), which increments any number it is given, or (:[]), which turns any value into a one-element list. It can also be used much more cleverly -- especially considering that any two-argument function can be turned into an operator with backticks -- but then (in my opinion) readability starts to suffer.
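A few of these sections in a GHCi session, for concreteness (my examples, not the article's):

ghci> map (1+) [1, 2, 3]
[2,3,4]
ghci> map (:[]) "abc"
["a","b","c"]
ghci> map ('x':) ["s", "y"]
["xs","xy"]
ghci> map (`div` 2) [10, 7]  -- backticks turn div into an operator
[5,3]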
Joel_Mckay•4h ago
With C, any claim one makes about repeatability is always wrong at some point, depending on the version compliance.
I like C, but Haskell is a happy, optimistic syntax... Julia is probably the language I'd wager on becoming more relevant as Moore's law's corpse begins to stink. =3
tayo42•4h ago
They're saying it's unnecessary because if you go in order, you first print "fizz" and then print "buzz", which always prints "fizz buzz" for the equivalent of mod 15; you don't need a special string like that.
The `if (m3 || m5)` just prints a newline, because under that condition you already printed something earlier.
kqr•5h ago
I've always liked this solution, which avoids that: https://archive.is/KJ39B
This allows you to add special prints by adding just the one line of code, changing nothing else.
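The archived page isn't quoted here, but a well-known solution with exactly that property is the monoid-based one, roughly:

import Control.Monad (guard)
import Data.Maybe (fromMaybe)

fizzbuzz :: Int -> String
fizzbuzz n = fromMaybe (show n) $ mconcat
  [ "Fizz" <$ guard (n `mod` 3 == 0)
  , "Buzz" <$ guard (n `mod` 5 == 0)
  -- a new rule is one added line, e.g.:
  -- , "Baz" <$ guard (n `mod` 7 == 0)
  ]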
Darmani•4h ago
Is there a way to think of proofs as being lazy? Yes, but it's not what you think. It's an idea in proof theory called polarization. Some parts of a proof can be called positive or negative. Positive parts roughly correspond to strict, and negative roughly correspond to lazy.
To explain a bit more: Suppose you want to prove all chess games terminate. You start by proving "There is no move in chess that increases the number of pieces on the board." This is a lemma with type `forall m: ChessMove, forall b: BoardState, numPieces b >= numPieces (applyMove m b)`. Suppose you now want to prove that, throughout a game of chess, the amount of material is decreasing. You would do this by inducting over the first lemma, which is essentially the same as using it in a recursive function that takes in a board state and a series of moves, and outputs a proof that the final state does not have more material than the initial state. This is compact, but intrinsically computational. But now you can imagine unrolling that recursive function and getting a different proof that the amount of material is always decreasing: simply write out every possible chess game and check. This is called "cut elimination."
So you can see there's a sense in which every component of a proof is "executable," and you can see whether it executes in a strict or lazy manner. Implications ("If A, then B") are lazy. Conjunctions ("A and B") can be either strict or lazy, depending on how they're used. I'm at the edge of my depth here and can't explain more -- in honesty, I never truly grokked proof polarization.
Conversely, in programming languages, it's not strictly accurate to say that the C program is strict and the Haskell program is lazy. In C, function definitions and macro expansions are lazy. You can have the BAR() macro create a #error, and yet FOO(BAR()) need not create a compile error. In Haskell, bang patterns, primitives like Int#, and the `seq` operator are all strict.
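A small illustration of those strict escape hatches in otherwise-lazy Haskell (my own example):

{-# LANGUAGE BangPatterns #-}

-- Lazy by default: the error is never forced, so this is just 1.
lazyDefault :: Int
lazyDefault = const 1 (error "never evaluated")

-- seq forces its first argument: this one throws.
strictSeq :: Int
strictSeq = error "boom" `seq` 1

-- A bang pattern forces the argument before the body runs.
strictArg :: Int -> Int
strictArg !_ = 1   -- strictArg (error "boom") throws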
So it's not the case that "proofs are lazy, C is strict, and Haskell is lazy, so Haskell is more like a proof." It's not even accurate to say that C is strict and Haskell is lazy. Within a proof, and within a C or a Haskell program, you can find lazy parts and strict parts.
djtango•3h ago
Learn paradigms not languages...