Introducing tmux-rs

https://richardscollin.github.io/tmux-rs/
666•Jtsummers•13h ago•215 comments

Flounder Mode – Kevin Kelly on a different way to do great work

https://joincolossus.com/article/flounder-mode/
198•latentnumber•13h ago•41 comments

Launch HN: K-Scale Labs (YC W24) – Open-Source Humanoid Robots

157•codekansas•12h ago•83 comments

AV1@Scale: Film Grain Synthesis, The Awakening

https://netflixtechblog.com/av1-scale-film-grain-synthesis-the-awakening-ee09cfdff40b
182•CharlesW•12h ago•147 comments

Wind Knitting Factory

https://www.merelkarhof.nl/work/wind-knitting-factory
98•bschne•8h ago•25 comments

Manipulating trapped air bubbles in ice for message storage in cold regions

https://www.cell.com/cell-reports-physical-science/fulltext/S2666-3864(25)00221-8
48•__rito__•3d ago•12 comments

Peasant Railgun

https://knightsdigest.com/what-exactly-is-the-peasant-railgun-in-dd-5e/
213•cainxinth•14h ago•157 comments

Poor Man's Back End-as-a-Service (BaaS), Similar to Firebase/Supabase/Pocketbase

https://github.com/zserge/pennybase
157•dcu•13h ago•97 comments

Electronic Arts Leadership Are Out of Their Goddamned Minds

https://aftermath.site/ea-dice-battlefield-battle-royale-free-to-play-f2p
22•dotmanish•1h ago•13 comments

Ubuntu 25.10 Raises RISC-V Profile Requirements

https://www.omgubuntu.co.uk/2025/06/ubuntu-riscv-rva23-support
81•bundie•2d ago•23 comments

Sound Chip, whisper me your secrets [video]

https://media.ccc.de/v/gpn23-302-sound-chip-whisper-me-your-secrets-
10•rasz•2d ago•0 comments

Opening up ‘Zero-Knowledge Proof’ technology

https://blog.google/technology/safety-security/opening-up-zero-knowledge-proof-technology-to-promote-privacy-in-age-assurance/
249•doomroot13•11h ago•151 comments

Where is my von Braun wheel?

https://angadh.com/wherevonbraunwheel
131•speckx•15h ago•98 comments

Caching is an abstraction, not an optimization

https://buttondown.com/jaffray/archive/caching-is-an-abstraction-not-an-optimization/
94•samuel246•2d ago•79 comments

CO2 sequestration through accelerated weathering of limestone on ships

https://www.science.org/doi/10.1126/sciadv.adr7250
36•PaulHoule•5h ago•27 comments

Postcard is now open source

https://www.contraption.co/postcard-open-source/
92•philip1209•12h ago•29 comments

High-Fidelity Simultaneous Speech-to-Speech Translation

https://arxiv.org/abs/2502.03382
73•Bluestein•8h ago•40 comments

You are what you launch: how software became a lifestyle brand

https://omeru.bearblog.dev/lifestyle/
40•freediver•2d ago•33 comments

An Algorithm for a Better Bookshelf

https://cacm.acm.org/news/an-algorithm-for-a-better-bookshelf/
80•pseudolus•2d ago•12 comments

Fei-Fei Li: Spatial intelligence is the next frontier in AI [video]

https://www.youtube.com/watch?v=_PioN-CpOP0
264•sandslash•2d ago•137 comments

Converge (YC S23) well-capitalized New York startup seeks product developers

https://www.runconverge.com/careers
1•thomashlvt•7h ago

Encoding Jake Gyllenhaal into one million checkboxes (2024)

https://ednamode.xyz/blogs/2.html
54•chilipepperhott•12h ago•13 comments

White House claims expansive power to nullify TikTok ban and other laws

https://www.nytimes.com/2025/07/03/us/politics/trump-bondi-tiktok-executive-power.html
4•ytpete•13m ago•1 comment

(Experiment) Colocating agent instructions with eng docs

https://technicalwriting.dev/ai/agents/colocate.html
6•dannyrosen•3d ago•2 comments

AI for Scientific Search

https://arxiv.org/abs/2507.01903
94•omarsar•13h ago•22 comments

Show HN: I rewrote my notepad calculator as a local-first app with CRDT syncing

https://numpad.io
30•tonyonodi•3d ago•12 comments

Michael Madsen has died

https://www.nytimes.com/2025/07/03/movies/michael-madsen-dead.html
96•anigbrowl•6h ago•28 comments

Astronomers discover 3I/ATLAS – Third interstellar object to visit Solar System

https://www.abc.net.au/news/science/2025-07-03/3i-atlas-a11pl3z-interstellar-object-in-our-solar-system/105489180
290•gammarator•1d ago•161 comments

About AI Evals

https://hamel.dev/blog/posts/evals-faq/
172•TheIronYuppie•3d ago•40 comments

Stalking the Statistically Improbable Restaurant with Data

https://ethanzuckerman.com/2025/07/03/stalking-the-statistically-improbable-restaurant-with-data/
60•nkurz•11h ago•31 comments

Haskelling My Python

https://unnamed.website/posts/haskelling-my-python/
172•barrenko•2mo ago

Comments

whalesalad•2mo ago
Generators are one of my favorite features of Python when used in this way. You can assemble complex transformation pipelines that don’t do any actual work and save it all for one single materialization.
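For example (an illustrative sketch, not code from the article), here are three chained generator expressions that do no work until the final sum() pulls values through the whole pipeline in a single pass:

  nums = range(10_000_000)
  evens = (n for n in nums if n % 2 == 0)   # no work happens yet
  squares = (n * n for n in evens)          # still nothing
  shifted = (n + 1 for n in squares)        # still nothing

  total = sum(shifted)                      # the single materialization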
nurettin•2mo ago
Excited to read their next blog where they discover functools.partial and return self.
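(For anyone who hasn't met it, functools.partial pre-applies arguments to an existing function; a tiny illustration:)

  from functools import partial

  def scale(factor, x):
      return factor * x

  double = partial(scale, 2)   # a new callable with factor=2 already bound
  print(double(21))            # 42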
nickpsecurity•2mo ago
I just discovered one from your comment. Thank you!
BiteCode_dev•2mo ago
Or, you know, use numpy.
shash•2mo ago
Yeah, that’s much more efficient.

But there’s something beautiful in the way that a Taylor expansion or a trigonometric identity emerge from the function definition. Also, it teaches an interesting concept in lazy evaluation.

I mean, why not write straight up assembler? That would be even more efficient…
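As a minimal sketch of that idea in Python (my own illustration, not the article's code): define exp purely by the identity exp = 1 + ∫ exp and let lazy generators produce the Taylor coefficients.

  from fractions import Fraction
  from itertools import islice

  def integrate(series_fn, constant):
      # Term-by-term integration of a power series given as a generator of coefficients.
      yield Fraction(constant)
      for n, a in enumerate(series_fn(), start=1):
          yield Fraction(a) / n

  def exp_series():
      # exp is its own derivative with constant term 1, so it is literally its own integral.
      yield from integrate(exp_series, 1)

  print([str(c) for c in islice(exp_series(), 6)])   # ['1', '1', '1/2', '1/6', '1/24', '1/120']

(Each coefficient pull opens another nested generator, so this is a toy, not an efficient implementation.)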

brianberns•2mo ago
This idea comes from a functional pearl called "Power Series, Power Serious" [0], which is well worth reading.

I implemented the same thing myself in F#. [1]

[0]: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

[1]: https://github.com/brianberns/PowerSeries

abeppu•2mo ago
Huh, there must have been something in the water leading up to this. Also from 1998 is this paper, "Calculus in coinductive form" and neither of these cites the other. https://ieeexplore.ieee.org/document/705675
brianberns•2mo ago
These are indeed very similar. Thanks for the link!

The math is a bit over my head, but this formulation seems more difficult than the one I'm familiar with. For example, x^2 is represented as 0::0::2 instead of 0::0::1 (because 2! = 2) and x^3 is represented as 0::0::0::6 instead of 0::0::0::1 (because 3! = 6). Is there a benefit to that?

barrenko•2mo ago
I was introduced to the notion of power series two weeks ago, and now it's seemingly everywhere...
coliveira•2mo ago
Power series have been everywhere for 200 years!
barrenko•2mo ago
I am learning that indeed.
angra_mainyu•2mo ago
Power series are possibly one of the most powerful tools in analysis.
fp64•2mo ago
This is one of the reasons I hate python: it allows for so many weird things that are only borderline standardised, if at all. When I started with python decades ago, I was also all hype, list comprehensions and functional patterns everywhere, and why not monkey patch a third party library at runtime? Now I regularly see such code written by the new junior devs, painful to maintain, painful to integrate into a larger code base, most certainly failing with TorchScript, and plainly convoluted for the sake of appearing smart.

If you want to code it in Haskell, have you considered using Haskell?

sanderjd•2mo ago
There is nothing weird or "borderline standardized" about generators...
fp64•2mo ago
Fair point, they have a proper PEP. I’ve been burnt by `lst[::-1]` and the like, which TorchScript does not (did not?) support, and to my surprise learned that the standard is not clear on this behaviour. Generators are fine, but I somewhat fail to see their use, as they move state somewhere not really clear, and I have seen plenty of problems with `list(generator())` confusing people.
ltbarcly3•2mo ago
I wrote a whole response thinking you were like 19 but you say you started with Python decades ago. If you still haven't gotten the hang of it I don't think there's anything to say. Generators have been in python for more than 2 decades so I find your claims extremely suspect.
fp64•2mo ago
I’ve been using python since version 1, my friend, no need to get personal and insulting. I gave an example of something that is not defined properly in a PEP as well. I really do not like python, and one of the reasons is that it encourages writing what I believe is code that is harder to understand and harder to maintain for people who didn’t write it. There’s nothing wrong with Haskell, but it has a rather steep learning curve when you’re not coming from functional programming, and if you embrace patterns like this you put extra burden on your colleagues.
IshKebab•2mo ago
I also really dislike Python but actually their implementation of generators is pretty good IMO. Few languages have it, e.g. Rust still doesn't have generators.

They are pretty niche but when you find a situation that needs them they're really elegant. I've used them for generating deterministic test stimulus and it's really nice.
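Something in this spirit, as a purely hypothetical sketch (not the actual test code): a seeded generator yields an endless, reproducible stream of stimulus, and the test just slices off what it needs.

  import random
  from itertools import islice

  def stimulus(seed=42):
      # Same seed -> same infinite sequence of test vectors, produced on demand.
      rng = random.Random(seed)
      while True:
          yield {"addr": rng.randrange(2**16), "data": rng.randrange(2**32)}

  for vec in islice(stimulus(), 3):
      print(vec)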

fp64•2mo ago
That’s maybe my point, maybe not. Generators can be useful, but somewhat niche as you said. I still prefer explicit code and try to avoid generators; I can’t remember the last time I wrote a yield. I disagree on their elegance. From my earlier days in python I remember writing a lot of convoluted things that felt smart at the time but turned out stupid, and I think representing an infinite list with recursive generators in python is exactly that. Maybe a fun exercise to show what’s possible, but from painful experience I am quite certain someone will read articles like these and apply it everywhere without fully understanding it. I loved perl and how terse and expressive I could be, but after a little while of not using it (hah, python came to my life!) I had no clue anymore what my code was supposed to mean, and the same happened later to my “oh so elegant” python code eventually.

Nowadays I mostly work with python ML code that has to be exported as TorchScript, thus I’m very sensitive to things that don’t work there. Not per se a problem with python, but having rewritten a lot of code to make it work, pretty much each and every time I found the explicit, imperative rewrite much cleaner and easier to follow and understand.

fp64•2mo ago
I am not sure what your argument is supposed to be. That I am wrong because I haven’t gotten the hang of it after all these years? I do understand generators. For a long time. I still believe they are usually an anti-pattern, same as the massive abuse of inheritance where you have 20 classes but only a single instance, yet you made your code “future proof”.

The point I wanted to make is that using generators, in particular like here, is something that I consider ugly and difficult to maintain, and it will probably have to be rewritten when trying to export to TorchScript. I really do not see how “just get a hang for it” can help me reevaluate my perspective.

Edit: Maybe you were hung up on my “standardised” - I have to admit I do not know how thoroughly the PEP defines generators and if really all edge cases are defined without needing to check the python source code. From past experiences, my trust in python language standards is a bit shaky, as it had been difficult to reproduce the very exact behaviour using a different language, or python features - without requiring digging through the sources.

notpushkin•2mo ago
Absolutely unrelated, but there’s a Haskell-like syntax for Python: https://web.archive.org/web/20241205024857/https://pyos.gith...

  f = x -> raise if
    x :: int  => IndexError x
    otherwise => ValueError x
Complete with pipelines, of course:

  "> {}: {}".format "Author" "stop using stale memes"
    |> print
sizeofchar•2mo ago
Wow, this is so awesome! A shame it didn’t progress.
agumonkey•2mo ago
not too far from that there's https://coconut-lang.org/
sanderjd•2mo ago
This is certainly neat. This isn't a criticism, but I think more like an expansion on the author's point:

The reason this all works is that generators plus memoization is "just" an implementation of the lazy sequences that Haskell has built in.
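For concreteness, here is one minimal way that combination can look (my sketch; the article's own cache may well differ): a decorator that records every value the underlying generator yields, so recursive re-instantiations replay the cached prefix instead of recomputing it.

  from functools import wraps
  from itertools import islice

  def memoized_generator(gen_fn):
      cache = []            # every value produced so far
      source = gen_fn()     # the single "real" generator behind all the clones

      @wraps(gen_fn)
      def wrapper():
          i = 0
          while True:
              if i == len(cache):
                  cache.append(next(source))
              yield cache[i]
              i += 1

      return wrapper

  @memoized_generator
  def ints():
      yield 1
      yield from map(lambda x: x + 1, ints())   # `ints` now resolves to the memoizing wrapper

  print(list(islice(ints(), 10)))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

With the cache in place, producing the first n integers is linear rather than quadratic, which is roughly what Haskell's memoised lazy list gives you for free.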

gerad•2mo ago
Isn’t the original python without the recursion lazy as well? I thought that was the entire point of generators.
sanderjd•2mo ago
You're right, fair enough. This isn't only about generators, it's also about how they interact with recursion and memoization.
mark_l_watson•2mo ago
Well, definitely very cool, but: the Haskell code is delightfully readable while the Python code takes effort for me to read. This is not meant as a criticism, this article is a neat thought experiment.
abirch•2mo ago
I agree that this was a great article.

For me, this isn't intuitive. It works; however, it doesn't scream recursion to me.

  def ints():
      yield 1
      yield from map(lambda x: x + 1, ints())
I preferred

  def ints():
      cnt = 1
      while True:
          yield cnt
          cnt += 1
thyrsus•2mo ago
I'm a python newbie, so please correct the following: The first function looks quadratic in time and linear in stack space, while the second function looks linear in time and constant in stack space. Memoizing would convert the first function to linear in time and linear in space (on the heap instead of the stack). For python, wouldn't I always use the second definition? For Haskell, I would use the [1..] syntax which the compiler would turn into constant space linear time machine code.
trealira•2mo ago
Late response, but yes, you are completely right. You wouldn't use either implementation whether you're using Python or Haskell; you'd use what you said, because that's both the most obvious and the most performant method of achieving the goal. It's just a fun exercise to show that the version using a lazy map is equivalent to the obvious thing. Some people find it mind-bending and satisfying in the same way some people might find bit-twiddling hacks cool, or math nerds might find any of these mathematical identities cool.

https://math.stackexchange.com/questions/505367/collection-o...
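(Side note: the closest Python analogue of Haskell's [1..] is probably itertools.count(1), which is likewise constant-space and involves no recursion.)

  from itertools import count, islice

  print(list(islice(count(1), 5)))   # [1, 2, 3, 4, 5]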

louthy•2mo ago
I've never written a line of python in my life, so I'm interested in how this "recursive function" can do anything different on each recursive call if it takes no arguments?

    def ints():
        yield 1
        yield from map(lambda x: x + 1, ints())
Surely it would always yield a stream of `1`s? Seems very weird to my brain. "As simple as that" it is not!
rowanG077•2mo ago
Sure it yields 1. But then it adds one to each yield from the recursive call. And repeat.
lalaithion•2mo ago
The first item yielded from ints() is 1.

For the second item, we grab the first item from ints(), and then apply the map operation, and 1+1 is 2.

For the third item, we grab the second item from ints(), and then apply the map operation, and 1+2 is 3.

louthy•2mo ago
Got it, thanks! The syntax was throwing me a bit there.
lalaithion•2mo ago
It’s a pretty bad system since it takes O(n^2) time to produce n integers but ¯\_(ツ)_/¯. Haskell avoids the extra cost by immutable-self-reference instead of creating a new generator each time.
tomsmeding•2mo ago
EDIT: Wrong, the Haskell code is linear. See child comments.

The extra cost is not in the recursive calls, of which there is only one per returned number. The cost is in achieving a yielded value n by starting with 1 and adding 1 to it (n-1) times. The given Haskell code:

ints = 1 : map (+1) ints

has the exact same problem, and it's just as quadratic. It might have a somewhat better constant factor, though (even apart from Python being interpreted) because there are fewer function calls involved.

mrkeen•2mo ago
When in doubt, measure!

Your code didn't show a quadratic blowup in the timing:

  main = print . sum $ take 1000000 ints
  ints = 1 : map (+1) ints

  500000500000
  real    0m0.022s
  user    0m0.021s
  sys     0m0.000s
tomsmeding•2mo ago
Interesting! My intuition was wrong. I neglected to fully appreciate that the list is memoised.

What's happening, if I'm not mistaken, is that the unevaluated tail of the list is at all times a thunk that holds a reference to the cons cell holding the previous list item. Hence this is more like `iterate (+1) 1` than it seems at first glance.

lgas•2mo ago
I am fairly certain lists in Haskell are just normal singly linked lists with pointers pointing to the next item in the list. There's no need to maintain a reference to the previous cell. The last cell in any infinite list (or any finite list that hasn't been fully evaluated) will be an unevaluated thunk but that's not really observable to the program/programmer.
tomsmeding•2mo ago
Oh for sure, I was talking about the internal representation on the heap. The user program sees none of this. :)
cartoffal•2mo ago
> The $ operator is nothing but syntactic sugar that allows you to write bar $ foo data instead of having to write bar (foo data). That’s it.

Actually, it's even simpler than that: the $ operator is nothing but a function that applies its left argument to its right one! The full definition is

    f $ x = f x
(plus a fixity declaration that sets its precedence and associativity)
IshKebab•2mo ago
This kind of thing is emblematic of how little Haskellers care about readability and discoverability. I'm sure it's great for people that are already expert Haskellers but it adds yet another thing to learn (that's really hard to search for!) and the benefit is... you can skip a couple of brackets. Awesome.
yoyohello13•2mo ago
You can type `:doc $` in ghci and it will tell you how the function is defined and give you usage examples.
gymbeaux•2mo ago
We aren’t enthusiastic about having to do that either
yoyohello13•2mo ago
Why? It’s the official documentation. Most haskellers have ghci open at all times when coding.
gymbeaux•2mo ago
Because (as far as I know) that's not the case for most other languages. It's an interesting (and certainly not necessarily "wrong") way to program when you aren't sure about how to write something. Python has something similar that I will occasionally use for this purpose, but it doesn't show documentation (as far as I know) or anything like that, while it sounds like ghci has functionality kind of like CLI tools' "--help"?

With C#, IntelliSense takes the role of ghci and does pop up, say, the valid methods and properties of a class, and IIRC includes documentation text.

So it's less about the fact that haskell has "coding help" built in and more about how that's presented to and interacted with by the developer.

itishappy•2mo ago
Python does indeed have similar functionality:

    $ py
    >>> help(print)

    $ ghci
    λ> :doc $
This returns the same documentation provided by intellisense.
gymbeaux•2mo ago
Right on.

To be clear, the core "issue" is whether developers use or like to use that functionality, and to that end I say I never use it for Python (obviously- since I didn't know of its existence). More succinctly, it seems like other languages have "more popular" alternatives.

itishappy•2mo ago
Agreed! I've never found myself making much use of Python's help feature, but I do all the time in Haskell! Wonder why?
gymbeaux•2mo ago
If you're being facetious, I would love to know why you use Haskell's but not Python's. Is it a better UX? A lack of alternatives like intellisense?
itishappy•2mo ago
Totally serious. I'm not claiming it's the best, but it's how I use the tools. There's a few reasons:

1. It's just how I learned. Googled my way through Python, then learned Haskell much later via a book which explicitly recommends using the REPL to get info about functions.

2. Types. I'm typically only looking for type information. A quick `:t` or `:i` is usually all I need in Haskell. I know Python half has them now, but I'm not

3. I'm certainly weird here, but I turn autocomplete and other pop-ups off. I find it distracting, and prefer the more intentional action of typing stuff into a REPL. Heck, I even do it in the VSCode terminal.

In summary: IDK, weird I guess. I should probably use it more in Python.

gizmo686•2mo ago
Skipping brackets is incredibly useful for readability. Basically every language that relies on brackets has settled on a style convention that makes them redundant with indentation. Admittedly, the big exception to this is function application. But Haskell is much more function oriented, so requiring brackets on function application is much more burdensome than in most languages.

As to searchability, this should be covered in whatever learn Haskell material you are using. And if it isn't, then you can literally just search for it in the Haskell search engine [0].

[0] https://hoogle.haskell.org/?hoogle=%24&scope=set%3Astackage

sabellito•2mo ago
I agree with this sentiment. The one thing I liked about Clojure was the simplicity of the syntax, as long as you kept away from writing macros.

In general, after so many decades programming, I've come to dislike languages with optional periods (a.foo) and optional parentheses for function calls: little gain in exchange for confusion over precedence and over what's a field vs. a method. Seems that the whole DSL craze of 15 years ago was a mistake after all.

Having said all that, I think haskell is awesome, in the original sense of the word. I became a better programmer after working with it for a bit.

lgas•2mo ago
You're only new once, but you write lines that benefit from $ every day.
IshKebab•2mo ago
That argument values learnability and discoverability at 0.
rowanG077•2mo ago
No, it really doesn't. It values language ergonomics higher than learnability. That doesn't mean learnability is valued at 0. It has nothing to do with discoverability.
yoyohello13•2mo ago
What do you mean by discoverability? What is the bar? Like is `:=` not discoverable in python. Or are ternaries not discoverable in Javascript? I'd say these are equally mysterious to a new learner. Just because $ is unfamiliar, doesn't mean it's worse than any other language construct.

Also, literally the first hit when you google “what does $ do in Haskell” is an explanation. So it’s in the docs, easily googlable, and one of the first things all learning material explains. How is that not discoverable?

mrkeen•2mo ago
Excessive brackets are an evergreen complaint against lisp.

Here's one that made the front page, same time as your comment: https://news.ycombinator.com/item?id=43753381

rowanG077•2mo ago
Haskell has its issues, but this really ain't it. $ is idiomatic and used all over the place and is greatly more readable than stacking up brackets. The discoverability is also great because, like most things in Haskell, you can literally just input it into Hoogle: https://hoogle.haskell.org/?hoogle=%24 and the first hit is, of course, the definition of it, which includes a full explanation of what it does.
pklausler•2mo ago
In Haskell, ($) = `id`, because "id f x" = "(id f) x" = "f x".
ltbarcly3•2mo ago
I have no idea why this is even slightly neat? Like, it's not surprising that it works, it's not clever, it's not performant, and you can't actually code like this because it will overflow the stack pretty quickly. It doesn't even save lines of code.

Alternatives that aren't dumb:

  for x in range(2**256):  # not infinite but go ahead and run it to the end and get back to me
      ...

  from itertools import repeat
  for x, _ in enumerate(repeat(None)):  # slightly more annoying but does work infinitely
      ...
Granted these aren't clever or non-performant enough to excite functional code fanboys.
mrkeen•2mo ago
The functional fanboys have moved onto languages that don't overflow the stack.
ltbarcly3•2mo ago
Read the article?
hedora•2mo ago
TIL haskell is Turing complete but only uses bounded memory.
dev360•2mo ago
So many people neglect the lack of TCO in Python, it’s like a beginner mistake you make when you try to make your Python read like the Haskell you just learned. Not sure why you’re getting downvoted though.
nexo-v1•2mo ago
I really like this idea too. Generators are one of my favorite parts of Python — super memory efficient, and great for chaining transformations. But in practice, I’ve found they can get hard to reason about, especially when you defer evaluation too much. Debugging gets tricky because you can’t easily inspect intermediate states.

When working with other engineers, I’ve learned to be careful: sometimes it’s better to just materialize things into a list for clarity, even if it’s less “elegant” on paper.

There’s a real balance between cleverness and maintainability here.

pletnes•2mo ago
There’s some stuff in itertools to cut sequences into batches. Could be a useful intermediate step - grab 100 things at a time and write functions that receive and emit lists, rather than generators all the way down.
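For instance, itertools.batched (Python 3.12+; older versions can build the same thing from islice) turns a lazy stream into concrete chunks that plain list-in/list-out functions can handle:

  from itertools import batched, count, islice   # batched requires Python 3.12+

  def process(batch):
      # Ordinary list in, ordinary list out: easy to inspect, test and debug.
      return [x * 2 for x in batch]

  stream = count(1)                               # stand-in for an unbounded source
  for batch in islice(batched(stream, 100), 3):   # take the first three 100-item chunks
      print(process(list(batch))[:5])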
ankitml•2mo ago
It is possible to test the chaining though, if you know your data well. If not, those edge cases in the data quality can throw things off balance very easily.
UltraSane•2mo ago
I often will intentionally store intermediate values in variables rather than just doing a clever one liner in Python specifically because I know it will make debugging easier.
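A trivial, made-up example of the trade-off (all names hypothetical):

  from types import SimpleNamespace

  users = [SimpleNamespace(email="A@example.com", active=True),
           SimpleNamespace(email="b@example.com", active=False)]

  # Clever one-liner: compact, but opaque in a debugger.
  result = sorted({u.email.lower() for u in users if u.active})

  # Same thing with intermediate values: each step can be printed, inspected or asserted on.
  active = [u for u in users if u.active]
  emails = {u.email.lower() for u in active}
  result = sorted(emails)
  print(result)   # ['a@example.com']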
benrutter•2mo ago
I like this! Tiny question, is the cache at the end any different from the inbuilt functools cache?