Haskelling My Python

https://unnamed.website/posts/haskelling-my-python/
172•barrenko•7mo ago

Comments

whalesalad•7mo ago
Generators are one of my favorite features of Python when used in this way. You can assemble complex transformation pipelines that don’t do any actual work and save it all for one single materialization.
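A minimal sketch of the kind of pipeline meant here (my example, not the article's): each stage is a lazy generator expression, and nothing executes until the final `sum` pulls values through the whole chain.

```python
# Each stage is lazy; no element is produced until sum() materialises
# the whole chain in a single pass.
nums = range(10)
evens = (n for n in nums if n % 2 == 0)
squares = (n * n for n in evens)

total = sum(squares)  # 0 + 4 + 16 + 36 + 64 = 120
```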
nurettin•7mo ago
Excited to read their next blog where they discover functools.partial and return self.
nickpsecurity•7mo ago
I just discovered one from your comment. Thank you!
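For anyone else discovering it from this thread, a quick illustration of `functools.partial` pre-binding arguments (the name `parse_hex` is my own invention):

```python
from functools import partial

# partial(f, *args, **kwargs) returns f with those arguments pre-bound.
parse_hex = partial(int, base=16)

print(parse_hex("ff"))  # 255
```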
BiteCode_dev•7mo ago
Or, you know, use numpy.
shash•7mo ago
Yeah, that’s much more efficient.

But there’s something beautiful in the way that a Taylor expansion or a trigonometric identity emerge from the function definition. Also, it teaches an interesting concept in lazy evaluation.

I mean, why not write straight up assembler? That would be even more efficient…
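To make the "a Taylor expansion emerges from the function definition" point concrete, here is a sketch (my own code, not the article's) of exp defined by the identity exp = 1 + ∫ exp, with generators standing in for lazy coefficient streams:

```python
from fractions import Fraction
from itertools import islice

def integrate(series, const=0):
    # ∫ (sum a_n x^n) dx = const + sum a_n/(n+1) x^(n+1),
    # so term n of the result is a_(n-1)/n.
    yield Fraction(const)
    for n, a in enumerate(series, start=1):
        yield Fraction(a, n)

def exp_series():
    # exp = 1 + ∫ exp; the leading 1 is yielded explicitly, so the
    # integral's constant term is dropped.
    yield Fraction(1)
    tail = integrate(exp_series())
    next(tail)  # skip the constant of integration
    yield from tail

# First five Maclaurin coefficients: 1, 1, 1/2, 1/6, 1/24
print(list(islice(exp_series(), 5)))
```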

brianberns•7mo ago
This idea comes from a functional pearl called "Power Series, Power Serious" [0], which is well worth reading.

I implemented the same thing myself in F#. [1]

[0]: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

[1]: https://github.com/brianberns/PowerSeries

abeppu•7mo ago
Huh, there must have been something in the water leading up to this. Also from 1998 is this paper, "Calculus in coinductive form" and neither of these cites the other. https://ieeexplore.ieee.org/document/705675
brianberns•7mo ago
These are indeed very similar. Thanks for the link!

The math is a bit over my head, but this formulation seems more difficult than the one I'm familiar with. For example, x^2 is represented as 0::0::2 instead of 0::0::1 (because 2! = 2) and x^3 is represented as 0::0::0::6 instead of 0::0::0::1 (because 3! = 6). Is there a benefit to that?

barrenko•7mo ago
I was introduced to the notion of power series two weeks ago, and now it's seemingly everywhere...
coliveira•7mo ago
Power series have been everywhere for 200 years!
barrenko•7mo ago
I am learning that indeed.
angra_mainyu•7mo ago
Power series are possibly one of the most powerful tools in analysis.
fp64•7mo ago
This is one of the reasons I hate Python: it allows so many weird things that are only borderline standardised, if at all. When I started with Python decades ago, I was also all hype: list comprehensions and functional patterns everywhere, and why not monkey patch a third-party library at runtime? Now I regularly see such code written by new junior devs: painful to maintain, painful to integrate into a larger code base, most certainly failing with TorchScript, and plainly convoluted for the sake of appearing smart.

If you want to code it Haskell, have you considered using Haskell?

sanderjd•7mo ago
There is nothing weird or "borderline standardized" about generators...
fp64•7mo ago
Fair point, they have a proper PEP. I’ve been burnt by `lst[::-1]` and the like, which TorchScript does not (did not?) support, and to my surprise I learned that the standard is not clear on this behaviour. Generators are fine, but I somewhat fail to see their use: they move state somewhere that isn’t really clear, and I have seen plenty of problems where needing `list(generator())` confuses people.
ltbarcly3•7mo ago
I wrote a whole response thinking you were like 19 but you say you started with Python decades ago. If you still haven't gotten the hang of it I don't think there's anything to say. Generators have been in python for more than 2 decades so I find your claims extremely suspect.
fp64•7mo ago
I’ve been using Python since version 1, my friend; no need to get personal and insulting. I gave an example of something that is not defined properly in a PEP as well. I really do not like Python, and one of the reasons is that it encourages writing of what I believe is code that is harder to understand and harder to maintain for people who didn’t write it. There’s nothing wrong with Haskell, but it has a rather steep learning curve when you’re not coming from functional programming, and if you embrace patterns like this you put extra burden on your colleagues.
IshKebab•7mo ago
I also really dislike Python but actually their implementation of generators is pretty good IMO. Few languages have it, e.g. Rust still doesn't have generators.

They are pretty niche but when you find a situation that needs them they're really elegant. I've used them for generating deterministic test stimulus and it's really nice.

fp64•7mo ago
That’s maybe my point, maybe not. Generators can be useful, but somewhat niche, as you said. I still prefer explicit code and try to avoid generators; I can’t remember the last time I wrote a yield. I disagree on their elegance. From my earlier days in Python I remember writing a lot of convoluted things that felt smart at the time but turned out stupid, and I think representing infinite lists with recursive generators in Python is exactly that. Maybe a fun exercise to show what’s possible, but from painful experience I am quite certain someone will read articles like these and apply the pattern everywhere without fully understanding it. I loved Perl and how terse and expressive I could be, but after a little while of not using it (hah, Python came into my life!) I had no clue anymore what my code was supposed to mean, and the same happened later to my “oh so elegant” Python code eventually.

Nowadays I mostly work with python ML code that has to be exported as TorchScript, thus I’m very sensitive to things that don’t work there. Not per se a problem with python - but having rewritten a lot of code to make it work, pretty much each and every time I found the explicit, imperative rewrite much cleaner and easier to follow and understand

fp64•7mo ago
I am not sure what your argument is supposed to be. That I am wrong because I haven’t gotten the hang of it after all these years? I do understand generators, and have for a long time. I still believe they are usually an anti-pattern, much like the massive abuse of inheritance where you have 20 classes but only a single instance, yet you made your code “future proof”.

The point I wanted to make is that using generators, in particular like here, is something that I consider ugly and difficult to maintain, and it will probably have to be rewritten when trying to export to TorchScript. I really do not see how “just get the hang of it” can help me reevaluate my perspective.

Edit: Maybe you were hung up on my “standardised” - I have to admit I do not know how thoroughly the PEP defines generators and if really all edge cases are defined without needing to check the python source code. From past experiences, my trust in python language standards is a bit shaky, as it had been difficult to reproduce the very exact behaviour using a different language, or python features - without requiring digging through the sources.

notpushkin•7mo ago
Absolutely unrelated, but there’s a Haskell-like syntax for Python: https://web.archive.org/web/20241205024857/https://pyos.gith...

  f = x -> raise if
    x :: int  => IndexError x
    otherwise => ValueError x
Complete with pipelines, of course:

  "> {}: {}".format "Author" "stop using stale memes"
    |> print
sizeofchar•7mo ago
Wow, this is so awesome! A shame it didn’t progress.
agumonkey•7mo ago
not too far from that there's https://coconut-lang.org/
sanderjd•7mo ago
This is certainly neat. This isn't a criticism, but I think more like an expansion on the author's point:

The reason this all works is that generators plus memoization is "just" an implementation of the lazy sequences that Haskell has built in.
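One way to see that claim: a small decorator (a hypothetical sketch, not the article's exact code) that shares a cache across all instances of a recursive generator turns the quadratic recursion into a linear lazy sequence, which is essentially what Haskell's lists give you for free.

```python
from itertools import count, islice

def memoize_gen(gen_fn):
    # One shared underlying generator plus a growing cache of its output;
    # every caller replays the cache, then extends it on demand.
    cache = []
    source = gen_fn()
    def replay():
        for i in count():
            if i >= len(cache):
                cache.append(next(source))
            yield cache[i]
    return replay

@memoize_gen
def ints():
    yield 1
    # The recursive call returns a replayer that reads the cache,
    # so each element is computed only once.
    yield from map(lambda x: x + 1, ints())

print(list(islice(ints(), 6)))  # [1, 2, 3, 4, 5, 6]
```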

gerad•7mo ago
Isn’t the original Python without the recursion lazy as well? I thought that was the entire point of generators.
sanderjd•7mo ago
You're right, fair enough. This isn't only about generators, it's also about how they interact with recursion and memoization.
mark_l_watson•7mo ago
Well, definitely very cool, but: the Haskell code is delightfully readable while the Python code takes effort for me to read. This is not meant as a criticism, this article is a neat thought experiment.
abirch•7mo ago
I agree that this was a great article.

For me, this isn't intuitive. It works; however, it doesn't scream recursion to me.

  def ints():
      yield 1
      yield from map(lambda x: x + 1, ints())
I preferred

  def ints():
      cnt = 1
      while True:
          yield cnt
          cnt += 1
thyrsus•7mo ago
I'm a Python newbie, so please correct the following: The first function looks quadratic in time and linear in stack space, while the second function looks linear in time and constant in stack space. Memoizing would convert the first function to linear in time and linear in space (on the heap instead of the stack). For Python, wouldn't I always use the second definition? For Haskell, I would use the [1..] syntax, which the compiler would turn into constant-space, linear-time machine code.
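A quick empirical check of that analysis (my own sketch; the function names are mine and absolute timings are machine-dependent):

```python
from itertools import islice
import timeit

def ints_rec():
    # recursive generator version
    yield 1
    yield from map(lambda x: x + 1, ints_rec())

def ints_loop():
    # plain loop version
    n = 1
    while True:
        yield n
        n += 1

# Both produce the same prefix...
assert list(islice(ints_rec(), 100)) == list(islice(ints_loop(), 100))

# ...but doubling n roughly quadruples the recursive version's time
# (quadratic), while the loop's time only doubles (linear).
for n in (100, 200, 400):
    t_rec = timeit.timeit(lambda: list(islice(ints_rec(), n)), number=10)
    t_loop = timeit.timeit(lambda: list(islice(ints_loop(), n)), number=10)
    print(n, f"recursive is {t_rec / t_loop:.1f}x slower")
```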
trealira•7mo ago
Late response, but yes, you are completely right. You wouldn't use either implementation whether you're using Python or Haskell; you'd use what you said, because that's both the most obvious and the most performant method of achieving the goal. It's just a fun exercise to show that the version using a lazy map is equivalent to the obvious thing. Some people find it mind-bending and satisfying in the same way some people might find bit-twiddling hacks cool, or math nerds might find any of these mathematical identities cool.

https://math.stackexchange.com/questions/505367/collection-o...

louthy•7mo ago
I've never written a line of python in my life, so I'm interested in how this "recursive function" can do anything different on each recursive call if it takes no arguments?

    def ints():
        yield 1
        yield from map(lambda x: x + 1, ints())
Surely it would always yield a stream of `1`s? Seems very weird to my brain. "As simple as that" it is not!
rowanG077•7mo ago
Sure, it yields 1. But then it adds one to each yield from the recursive call. And repeat.
lalaithion•7mo ago
The first item yielded from ints() is 1.

For the second item, we grab the first item from ints(), and then apply the map operation, and 1+1 is 2.

For the third item, we grab the second item from ints(), and then apply the map operation, and 1+2 is 3.

louthy•7mo ago
Got it, thanks! The syntax was throwing me a bit there.
lalaithion•7mo ago
It’s a pretty bad system since it takes O(n^2) time to produce n integers but ¯\_(ツ)_/¯. Haskell avoids the extra cost by immutable-self-reference instead of creating a new generator each time.
tomsmeding•7mo ago
EDIT: Wrong, the Haskell code is linear. See child comments.

The extra cost is not in the recursive calls, of which there is only one per returned number. The cost is in achieving a yielded value n by starting with 1 and adding 1 to it (n-1) times. The given Haskell code:

ints = 1 : map (+1) ints

has the exact same problem, and it's just as quadratic. It might have a somewhat better constant factor, though (even apart from Python being interpreted), because there are fewer function calls involved.

mrkeen•7mo ago
When in doubt, measure!

Your code didn't show a quadratic blowup in the timing:

  main = print . sum $ take 1000000 ints
  ints = 1 : map (+1) ints

  500000500000
  real    0m0.022s
  user    0m0.021s
  sys     0m0.000s
tomsmeding•7mo ago
Interesting! My intuition was wrong. I neglected to fully appreciate that the list is memoised.

What's happening, if I'm not mistaken, is that the unevaluated tail of the list is at all times a thunk that holds a reference to the cons cell holding the previous list item. Hence this is more like `iterate (+1) 1` than it seems at first glance.

lgas•7mo ago
I am fairly certain lists in Haskell are just normal singly linked lists with pointers pointing to the next item in the list. There's no need to maintain a reference to the previous cell. The last cell in any infinite list (or any finite list that hasn't been fully evaluated) will be an unevaluated thunk but that's not really observable to the program/programmer.
tomsmeding•7mo ago
Oh for sure, I was talking about the internal representation on the heap. The user program sees none of this. :)
cartoffal•7mo ago
> The $ operator is nothing but syntactic sugar that allows you to write bar $ foo data instead of having to write bar (foo data). That’s it.

Actually, it's even simpler than that: the $ operator is nothing but a function that applies its left argument to its right one! The full definition is

    f $ x = f x
(plus a directive that sets its precedence and association)
IshKebab•7mo ago
This kind of thing is emblematic of how little Haskellers care about readability and discoverability. I'm sure it's great for people that are already expert Haskellers but it adds yet another thing to learn (that's really hard to search for!) and the benefit is... you can skip a couple of brackets. Awesome.
yoyohello13•7mo ago
You can type `:doc $` in ghci and it will tell you how the function is defined and give you usage examples.
gymbeaux•7mo ago
We aren’t enthusiastic about having to do that either
yoyohello13•7mo ago
Why? It’s the official documentation. Most Haskellers have ghci open at all times when coding.
gymbeaux•7mo ago
Because (as far as I know) that's not the case for most other languages. It's an interesting (and certainly not necessarily "wrong") way to program when you aren't sure how to write something. Python has something similar that I will occasionally use for this purpose, but it doesn't show documentation (as far as I know), whereas it sounds like ghci has functionality akin to a CLI tool's "--help"?

With C#, IntelliSense takes the role of ghci: it pops up the valid methods and properties of a class and, IIRC, includes documentation text.

So it's less about Haskell having "coding help" built in and more about how that help is presented to and interacted with by the developer.

itishappy•7mo ago
Python does indeed have similar functionality:

    $ py
    >>> help(print)

    $ ghci
    λ> :doc $
This returns the same documentation provided by intellisense.
gymbeaux•7mo ago
Right on.

To be clear, the core "issue" is whether developers use or like to use that functionality, and to that end I say I never use it for Python (obviously- since I didn't know of its existence). More succinctly, it seems like other languages have "more popular" alternatives.

itishappy•7mo ago
Agreed! I've never found myself making much use of Python's help feature, but I do all the time in Haskell! Wonder why?
gymbeaux•7mo ago
If you're being facetious, I would love to know why you use Haskell's but not Python's. Is it a better UX? A lack of alternatives like intellisense?
itishappy•7mo ago
Totally serious. I'm not claiming it's the best, but it's how I use the tools. There's a few reasons:

1. It's just how I learned. Googled my way through Python, then learned Haskell much later via a book which explicitly recommends using the REPL to get info about functions.

2. Types. I'm typically only looking for type information. A quick `:t` or `:i` is usually all I need in Haskell. I know Python half has them now, but I'm not

3. I'm certainly weird here, but I turn autocomplete and other pop-ups off. I find it distracting, and prefer the more intentional action of typing stuff into a REPL. Heck, I even do it in the VSCode terminal.

In summary: IDK, weird I guess. I should probably use it more in Python.

gizmo686•7mo ago
Skipping brackets is incredibly useful for readability. Basically every language that relies on brackets has settled on a style convention that makes them redundant with indentation. Admittedly, the big exception to this is function application. But Haskell is much more function-oriented, so requiring brackets on function application would be much more burdensome than in most languages.

As to searchability, this should be covered in whatever learn Haskell material you are using. And if it isn't, then you can literally just search for it in the Haskell search engine [0].

[0] https://hoogle.haskell.org/?hoogle=%24&scope=set%3Astackage

sabellito•7mo ago
I agree with this sentiment. The one thing I liked about Clojure was the simplicity of the syntax, as long as you kept away from writing macros.

In general, after so many decades programming, I've come to dislike languages with optional periods (a.foo) and optional parentheses for function calls: little gain in exchange for confusion about precedence and about what's a field vs. a method. Seems that the whole DSL craze of 15 years ago was a mistake after all.

Having said all that, I think haskell is awesome, in the original sense of the word. I became a better programmer after working with it for a bit.

lgas•7mo ago
You're only new once, but you write lines that benefit from $ every day.
IshKebab•7mo ago
That argument values learnability and discoverability at 0.
rowanG077•7mo ago
No, it really doesn't. It values language ergonomics higher than learnability. That doesn't mean learnability is valued at 0. It has nothing to do with discoverability.
yoyohello13•7mo ago
What do you mean by discoverability? What is the bar? Like, is `:=` not discoverable in Python? Are ternaries not discoverable in JavaScript? I'd say these are equally mysterious to a new learner. Just because $ is unfamiliar doesn't mean it's worse than any other language construct.

Also, literally the first hit when you google “what does $ do in Haskell” is an explanation. So it’s in the docs, easily googlable, and one of the first things all learning material explains. How is that not discoverable?

mrkeen•7mo ago
Excessive brackets are an evergreen complaint against lisp.

Here's one that made the front page, same time as your comment: https://news.ycombinator.com/item?id=43753381

rowanG077•7mo ago
Haskell has its issues, but this really ain't it. $ is idiomatic, used all over the place, and greatly more readable than stacking up brackets. The discoverability is also great because, like most things in Haskell, you can literally just put it into Hoogle: https://hoogle.haskell.org/?hoogle=%24 and the first hit is, of course, the definition, which includes a full explanation of what it does.
pklausler•7mo ago
In Haskell, ($) = `id`, because "id f x" = "(id f) x" = "f x".
ltbarcly3•7mo ago
I have no idea why this is even slightly neat? Like, it's not surprising that it works, it's not clever, it's not performant, and you can't actually code like this because it will overflow the stack pretty quickly. It doesn't even save lines of code.

Alternatives that aren't dumb:

  for x in range(2**256):  # not infinite but go ahead and run it to the end and get back to me

  from itertools import repeat
  for x, _ in enumerate(repeat(None)):  # slightly more annoying but does work infinitely
Granted these aren't clever or non-performant enough to excite functional code fanboys.
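For completeness (my addition, not the parent's): the stdlib's idiomatic infinite counter is `itertools.count`, which is non-recursive and constant-space.

```python
from itertools import count, islice

# count(1) yields 1, 2, 3, ... forever; islice takes a finite prefix.
for x in islice(count(1), 5):
    print(x)  # prints 1 through 5
```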
mrkeen•7mo ago
The functional fanboys have moved onto languages that don't overflow the stack.
ltbarcly3•7mo ago
Read the article?
hedora•7mo ago
TIL haskell is Turing complete but only uses bounded memory.
dev360•7mo ago
So many people neglect the lack of TCO in Python, it’s like a beginner mistake you make when you try to make your Python read like the Haskell you just learned. Not sure why you’re getting downvoted though.
nexo-v1•7mo ago
I really like this idea too. Generators are one of my favorite parts of Python — super memory efficient, and great for chaining transformations. But in practice, I’ve found they can get hard to reason about, especially when you defer evaluation too much. Debugging gets tricky because you can’t easily inspect intermediate states.

When working with other engineers, I’ve learned to be careful: sometimes it’s better to just materialize things into a list for clarity, even if it’s less “elegant” on paper.

There’s a real balance between cleverness and maintainability here.

pletnes•7mo ago
There’s some stuff in itertools to cut sequences into batches. Could be a useful intermediate step - grab 100 things at a time and write functions that receive and emit lists, rather than generators all the way down.
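A sketch of that suggestion (my own code): `itertools.batched` in Python 3.12+ does the chunking, and the helper below mimics it for older versions, so each stage can take and return plain lists.

```python
from itertools import islice

def batched(iterable, n):
    # Minimal stand-in for itertools.batched (Python 3.12+): yields
    # successive lists of up to n items.
    it = iter(iterable)
    while chunk := list(islice(it, n)):
        yield chunk

def stage(xs):
    # An eager pipeline stage: receives and returns a plain list.
    return [x * 2 for x in xs]

stream = (n for n in range(250))  # some upstream generator

results = []
for batch in batched(stream, 100):  # lists of 100, 100, then 50 items
    results.extend(stage(batch))

print(len(results))  # 250
```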
ankitml•7mo ago
It is possible to test the chaining though, if you know your data well. If not, those edge cases in the data quality can throw things off balance very easily.
UltraSane•7mo ago
I often will intentionally store intermediate values in variables rather than just doing a clever one liner in Python specifically because I know it will make debugging easier.
benrutter•7mo ago
I like this! Tiny question, is the cache at the end any different from the inbuilt functools cache?