But there’s something beautiful in the way that a Taylor expansion or a trigonometric identity emerges from the function definition. It also teaches an interesting concept: lazy evaluation.
I mean, why not write straight-up assembler? That would be even more efficient…
I implemented the same thing myself in F#. [1]
[0]: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
The math is a bit over my head, but this formulation seems more difficult than the one I'm familiar with. For example, x^2 is represented as 0::0::2 instead of 0::0::1 (because 2! = 2) and x^3 is represented as 0::0::0::6 instead of 0::0::0::1 (because 3! = 6). Is there a benefit to that?
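If I’m reading the factorial convention right, one benefit is that the list then stores the derivatives of the function at 0 (the n-th Taylor coefficient times n!), so differentiation becomes just dropping the head of the list. A minimal sketch with plain lists rather than lazy streams (the `deriv` name and encoding are mine, not from the article):

```python
# Encode a series by its derivatives at 0: [f(0), f'(0), f''(0), ...],
# i.e. the n-th Taylor coefficient multiplied by n!.
def deriv(series):
    # Differentiating shifts every derivative down one slot,
    # so it's just the tail -- no index/coefficient bookkeeping.
    return series[1:]

x_cubed = [0, 0, 0, 6]   # x^3: f'''(0) = 3! = 6
print(deriv(x_cubed))    # [0, 0, 6], which encodes 3*x^2 (its f''(0) = 6)
```

With the ordinary 0::0::1 encoding you’d instead have to multiply each coefficient by its index when differentiating.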
If you want to code it in Haskell, have you considered using Haskell?
They are pretty niche but when you find a situation that needs them they're really elegant. I've used them for generating deterministic test stimulus and it's really nice.
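To illustrate that use case (this sketch is my own, not the commenter’s actual code): a seeded generator gives an arbitrarily long but fully reproducible stimulus stream, which is exactly what deterministic testing wants.

```python
import random
from itertools import islice

def stimulus(seed):
    # Same seed -> same infinite stream, so every test run sees
    # identical stimulus without storing it anywhere.
    rng = random.Random(seed)
    while True:
        yield rng.randrange(256)

run1 = list(islice(stimulus(42), 8))
run2 = list(islice(stimulus(42), 8))
print(run1 == run2)  # True: the stream is deterministic
```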
Nowadays I mostly work with Python ML code that has to be exported as TorchScript, so I’m very sensitive to things that don’t work there. It’s not per se a problem with Python, but having rewritten a lot of code to make it work, pretty much every time I found the explicit, imperative rewrite much cleaner and easier to follow and understand.
The point I wanted to make is that using a generator, particularly like here, is something I consider ugly and difficult to maintain, and it will probably have to be rewritten when exporting to TorchScript. I really do not see how “just get a hang for it” can help me reevaluate my perspective.
Edit: Maybe you were hung up on my “standardised”. I have to admit I do not know how thoroughly the PEP defines generators, or whether all edge cases are defined without needing to check the Python source code. From past experience, my trust in Python language standards is a bit shaky: it has been difficult to reproduce the exact behaviour in a different language, or with other Python features, without digging through the sources.
    f = x -> raise if
        x :: int  => IndexError x
        otherwise => ValueError x
Complete with pipelines, of course:

    "> {}: {}".format "Author" "stop using stale memes"
    |> print
The reason this all works is that generators plus memoization is "just" an implementation of the lazy sequences that Haskell has built in.
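A rough sketch of what “generators plus memoization” can look like in Python (the `Lazy` class is my own illustration, not from the thread):

```python
import itertools

class Lazy:
    """Wrap a generator and cache everything it yields, so repeated
    indexing never re-runs the generator -- roughly the sharing you
    get for free from Haskell's lazy lists."""
    def __init__(self, gen):
        self._it = gen
        self._cache = []
    def __getitem__(self, i):
        # Pull from the generator only until index i is cached.
        while len(self._cache) <= i:
            self._cache.append(next(self._it))
        return self._cache[i]

nats = Lazy(itertools.count(1))
print(nats[4])  # 5; asking for nats[4] again just hits the cache
```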
For me, this isn't intuitive. It works; however, it doesn't scream recursion to me.
    def ints():
        yield 1
        yield from map(lambda x: x + 1, ints())
I preferred:

    def ints():
        cnt = 1
        while True:
            yield cnt
            cnt += 1
    def ints():
        yield 1
        yield from map(lambda x: x + 1, ints())
Surely it would always yield a stream of `1`s? Seems very weird to my brain. “As simple as that” it is not!

For the second item, we grab the first item from ints(), and then apply the map operation, and 1+1 is 2.
For the third item, we grab the second item from ints(), and then apply the map operation, and 1+2 is 3.
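Stepping through it mechanically with `islice` shows exactly that unrolling (a small sketch; it only works for modest n before CPython’s recursion limit bites, since each value nests another generator):

```python
from itertools import islice

def ints():
    yield 1
    # Each recursive level re-yields the inner stream shifted up by one.
    yield from map(lambda x: x + 1, ints())

print(list(islice(ints(), 5)))  # [1, 2, 3, 4, 5]
```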
The extra cost is not in the recursive calls, of which there is only one per returned number. The cost is in achieving a yielded value n by starting with 1 and adding 1 to it (n-1) times. The given Haskell code:
ints = 1 : map (+1) ints
has the exact same problem, and it's just as quadratic. It might have a somewhat better constant factor, though (even apart from Python being interpreted), because there are fewer function calls involved.
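For the Python version, counting the additions makes the quadratic cost concrete (the counting wrapper is mine): producing value n takes n-1 additions, so the first n values cost n(n-1)/2 in total.

```python
from itertools import islice

counter = {"adds": 0}

def ints():
    def plus1(x):
        counter["adds"] += 1  # tally every addition performed
        return x + 1
    yield 1
    yield from map(plus1, ints())

list(islice(ints(), 100))
print(counter["adds"])  # 4950 = 100*99/2 additions for the first 100 values
```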
Your code didn't show a quadratic blowup in the timing:
    main = print . sum $ take 1000000 ints
    ints = 1 : map (+1) ints

    500000500000

    real    0m0.022s
    user    0m0.021s
    sys     0m0.000s
What's happening, if I'm not mistaken, is that the unevaluated tail of the list is at all times a thunk that holds a reference to the cons cell holding the previous list item. Hence this is more like `iterate (+1) 1` than it seems at first glance.
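A Python analogue of `iterate (+1) 1` makes the difference visible: each value is computed from its predecessor in a single step, so n values cost O(n) additions rather than O(n^2) (sketch; the `iterate` name mirrors the Haskell function):

```python
from itertools import islice

def iterate(f, x):
    # Yield x, f(x), f(f(x)), ... -- one application of f per value,
    # which is what thunk sharing buys the Haskell version.
    while True:
        yield x
        x = f(x)

print(list(islice(iterate(lambda n: n + 1, 1), 5)))  # [1, 2, 3, 4, 5]
```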
Actually, it's even simpler than that: the $ operator is nothing but a function that applies its left argument to its right one! The full definition is
    f $ x = f x
(plus a directive that sets its precedence and associativity).

As to searchability, this should be covered in whatever learn-Haskell material you are using. And if it isn't, then you can literally just search for it in the Haskell search engine [0].
[0] https://hoogle.haskell.org/?hoogle=%24&scope=set%3Astackage
In general, after so many decades programming, I've come to dislike languages with optional periods (a.foo) and optional parentheses for function calls: little gain for a lot of confusion about precedence and about what's a field vs. a method. Seems that the whole DSL craze of 15 years ago was a mistake after all.
Having said all that, I think haskell is awesome, in the original sense of the word. I became a better programmer after working with it for a bit.
Alternatives that aren't dumb:

    for x in range(2**256):  # not infinite, but go ahead and run it to the end and get back to me
        ...

    from itertools import repeat
    for x, _ in enumerate(repeat(None)):  # slightly more annoying but does work infinitely
        ...
Granted these aren't clever or non-performant enough to excite functional code fanboys.

When working with other engineers, I’ve learned to be careful: sometimes it’s better to just materialize things into a list for clarity, even if it’s less “elegant” on paper.
There’s a real balance between cleverness and maintainability here.