frontpage.

Things you can do with a Software Defined Radio (2024)

https://blinry.org/50-things-with-sdr/
236•mihau•1h ago•48 comments

IINA Introduces Plugin System

https://iina.io/plugins/
18•xnhbx•17m ago•3 comments

CIA Freedom of Information Act Electronic Reading Room

https://www.cia.gov/readingroom
91•bookofjoe•3h ago•9 comments

Self Propagating NPM Malware Compromises over 40 Packages

https://www.stepsecurity.io/blog/ctrl-tinycolor-and-40-npm-packages-compromised
411•jamesberthoty•5h ago•319 comments

Letters of Note – Bertrand Russell to Oswald Mosley

https://lettersofnote.com/2016/02/02/every-ounce-of-my-energy/
4•giraffe_lady•5m ago•1 comment

Implicit ODE Solvers Are Not Universally More Robust Than Explicit ODE Solvers

https://www.stochasticlifestyle.com/implicit-ode-solvers-are-not-universally-more-robust-than-exp...
43•cbolton•2h ago•9 comments

Generative AI is hollowing out entry-level jobs, study finds

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5425555
141•zeuch•3h ago•118 comments

Tesla Faces US Auto Safety Investigation over Door Handles

https://www.bloomberg.com/news/articles/2025-09-16/tesla-tsla-faces-probe-by-us-auto-safety-agenc...
52•corvad•46m ago•31 comments

Mother of All Demos (1968)

https://wordspike.com/s/5ip0xneiTsc
61•thekuanysh•2h ago•25 comments

Robert Redford has died

https://www.nytimes.com/2025/09/16/movies/robert-redford-dead.html
314•uptown•4h ago•89 comments

60 years after Gemini, newly processed images reveal details

https://arstechnica.com/space/2025/09/60-years-after-gemini-newly-processed-images-reveal-incredi...
180•sohkamyung•3d ago•45 comments

Teen Safety, Freedom, and Privacy

https://openai.com/index/teen-safety-freedom-and-privacy
37•meetpateltech•3h ago•31 comments

Microsoft Favors Anthropic over OpenAI for Visual Studio Code

https://www.theverge.com/report/778641/microsoft-visual-studio-code-anthropic-claude-4
51•corvad•1h ago•18 comments

Java 25 Officially Released

https://mail.openjdk.org/pipermail/announce/2025-September/000360.html
90•mkurz•3h ago•18 comments

The old SF tech scene is dead. What it's morphing into is more sinister

https://www.sfgate.com/tech/article/bay-area-tech-scene-dorky-now-terrifying-21042943.php
56•jakemontero24•1h ago•28 comments

Scientists uncover extreme life inside the Arctic ice

https://news.stanford.edu/stories/2025/09/extreme-life-arctic-ice-diatoms-ecological-discovery
48•hhs•3d ago•16 comments

1975 Sep 16 MOS Technology samples 6502 at WESCON, here's how they designed it

https://www.EmbeddedRelated.com/showarticle/1453.php
5•jason_s•1h ago•1 comment

Learn x86-64 assembly by writing a GUI from scratch (2023)

https://gaultier.github.io/blog/x11_x64.html
201•ibobev•4d ago•22 comments

React is winning by default and slowing innovation

https://www.lorenstew.art/blog/react-won-by-default/
637•dbushell•22h ago•726 comments

"Your" vs. "My" in user interfaces

https://adamsilver.io/blog/your-vs-my-in-user-interfaces/
376•Twixes•13h ago•186 comments

Migrating to React Native's New Architecture

https://shopify.engineering/react-native-new-architecture
75•vidyesh•3d ago•47 comments

Adding FRM parser utility to MariaDB

https://hp77-creator.github.io/blogs/gsoc25
6•hp77•3d ago•0 comments

macOS Tahoe

https://www.apple.com/os/macos/
573•Wingy•23h ago•842 comments

William Gibson Reads Neuromancer (2004)

http://bearcave.com/bookrev/neuromancer/neuromancer_audio.html
284•exvi•19h ago•82 comments

Trucker built a scale model of NYC over 21 years

https://gothamist.com/arts-entertainment/this-trucker-built-a-scale-model-of-nyc-over-21-years-it...
65•speckx•3h ago•11 comments

Israel has committed genocide in the Gaza Strip, UN Commission finds

https://www.ohchr.org/en/press-releases/2025/09/israel-has-committed-genocide-gaza-strip-un-commi...
5•aabdelhafez•4m ago•0 comments

Wanted to spy on my dog, ended up spying on TP-Link

https://kennedn.com/blog/posts/tapo/
514•kennedn•23h ago•162 comments

WordNumbers: Counting letters of number names, alphabetized and concatenated

http://conway.rutgers.edu/~ccshan/wiki/blog/posts/WordNumbers1/
13•lupire•2d ago•2 comments

Hosting a website on a disposable vape

https://bogdanthegeek.github.io/blog/projects/vapeserver/
1291•BogdanTheGeek•22h ago•439 comments

Klotski

https://2swap.github.io/Klotski-Webpage/
17•surprisetalk•4d ago•4 comments

DumPy: NumPy except it's OK if you're dum

https://dynomight.net/dumpy/
174•RebelPotato•3mo ago

Comments

the__alchemist•3mo ago
Clear articulation of why Numpy syntax is provincial and difficult to learn. Perhaps the clearest part clicks when the author compares the triple-nested loop to the numpy-function approach: the former is IMO (and the author agrees) much easier to understand, and more universal.
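For readers who haven't opened the article yet, here is a rough sketch of the kind of comparison being discussed; the shapes and the batched-solve task are hypothetical stand-ins, not the author's exact example.

    import numpy as np

    I, J, K = 4, 5, 3
    A = np.random.randn(I, J, K, K)   # a stack of small K-by-K systems
    X = np.random.randn(I, K)
    Y = np.random.randn(J, K)

    # Loop version: the indexing is explicit and easy to follow.
    Z_loop = np.empty((I, J))
    for i in range(I):
        for j in range(J):
            Z_loop[i, j] = Y[j] @ np.linalg.solve(A[i, j], X[i])

    # Vectorized version: equivalent, but the reader has to decode the axis juggling.
    AiX = np.linalg.solve(A, X[:, None, :, None])[..., 0]   # shape (I, J, K)
    Z_vec = (Y[None, :, :] * AiX).sum(axis=-1)

    assert np.allclose(Z_loop, Z_vec)
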
Gimpei•3mo ago
I’ve known some people who didn’t want to learn the syntax of numpy and did it all in loops, and the code was not easier to read. It was harder. The fundamental issue is that operations on high-dimensional arrays are very difficult to reason about. Numpy can probably be improved, but I don’t think loops are the answer.
okigan•3mo ago
What’s a better syntax then?
tikhonj•3mo ago
The real question—to which I have absolutely no answer—is not about syntax, it's about concepts: what is a better way to think about higher-dimensional arrays rather than loops and indices? I'm convinced that something better exists and, if it existed, encoding it in a sufficiently expressive (ie probably not-Python) language would give us the corresponding syntax, but trying to come up with a better syntax without a better conceptual model won't get us very far.

Then again, maybe even that is wrong! "Notation as a tool for thought" and all that. Maybe "dimension-munging" in APL really is the best way to do these things, once you really understand it.

bee_rider•3mo ago
Numpy seems somewhat constrained here… it grew out of the matrix ecosystem, and matrices map naturally to two-dimensional arrays (sidenote: it’s super annoying that "n-dimensional" means different things for matrices and arrays; the matrix’s dimension corresponds to the width of the array, not to its number of axes).

Anyway, the general problem of having an n-dimensional array and wanting to dynamically… I dunno, it is a little tricky. But, sometimes when I see the examples people pop up with, I wonder how much pressure could be relieved if we just had a nice way of expressing operations on block or partitioned matrices. Like the canonical annoying example of wanting to apply solve using a series of small-ish matrices on a series of vectors: that’s just a block-diagonal matrix…
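A rough sketch of that block-diagonal framing, under assumed shapes (scipy.sparse is one way to express it; this is an illustration, not code from the article):

    import numpy as np
    from scipy.sparse import block_diag
    from scipy.sparse.linalg import spsolve

    N, K = 100, 4
    As = np.random.randn(N, K, K)   # N small K-by-K systems
    bs = np.random.randn(N, K)      # N right-hand sides

    # Stack the small systems into one sparse block-diagonal system and solve once.
    A_big = block_diag(list(As), format="csr")          # shape (N*K, N*K)
    x = spsolve(A_big, bs.ravel()).reshape(N, K)

    # Same result as solving each small system on its own.
    assert np.allclose(x, np.linalg.solve(As, bs[..., None])[..., 0])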

CamperBob2•3mo ago
English. "Write me a Python function or program that does X, Y, and Z on U and V using W." That will be the inevitable outcome of current trends, where relatively-primitive AI tools are used to write slightly more sophisticated code than would otherwise be written, which in turn is used to create slightly less-primitive AI tools.

For example, I just cut-and-pasted the author's own cri de coeur into Claude: https://claude.ai/share/1d750315-bffa-434b-a7e8-fb4d739ac89a Presumably at least one of the vectorized versions it replied with will work, although none is identical to the author's version.

When this cycle ends, high-level programs and functions will be as incomprehensible to most mainstream developers as assembly is today. Today's specs are tomorrow's programs.

Not a bad thing, really. And overdue, as the article makes all too clear. But the transition will be a dizzying one, with plenty of collateral disruption along the way.

willseth•3mo ago
why_not_both.gif
breppp•3mo ago
I've read the article, and it didn't seem to me that the author is suggesting loops.
dahart•3mo ago
The point here is not that it’s loops per se, the point is that the indexing is explicit. It seems like a big win to me. The article’s ~10 non-trivial examples all make the code easier to read, and more importantly, easier to understand exactly what the code is doing. It is true that some operations are difficult to reason about; that’s where explicit indexing really helps.

The article resonates with me because I do want to learn numpy syntax. I’ve written hundreds of programs with numpy, spent countless hours doing battle with it, and I feel like I’m no better off now than someone who’s brand new to it. The indexing is constantly confounding, nothing ever just works. Anytime you see "None" and "axis=" inside an operation, it’s a tell: bound to be difficult to comprehend. I’m always having to guess at some combination of reshape, dstack, hstack, transpose, and five other shape changers I’m forgetting, just to get something to work, and it’s difficult to read and understand later. It feels like there is no debugging, only rewriting. I keep rereading the manual for einsum, and I’ve used it, but I can’t explain how, why, or when to use it; it seems like the thing you resort to because no other indexing seems to work.

The ability to do straightforward, explicit, non-clever indexing as if you were writing loops seems like a pretty big step forward.
collingreen•3mo ago
I involuntarily whispered "reshape" to myself near the top of your comment. Numpy is a very different way for me to think and I have similar feelings to what you're describing.
cl3misch•3mo ago
I could never understand why people use dstack, hstack and the like. I think plain np.stack with the axis specified explicitly is easier to write and to read.

For transposes, np.einsum can be easier to read, as it lets you use (single-character, admittedly) axis specifiers to "name" them.
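A small illustration of the difference (my own example, not from the comment above):

    import numpy as np

    a = np.random.randn(3, 4)
    b = np.random.randn(3, 4)

    # np.stack with an explicit axis says exactly where the new axis goes;
    # dstack/hstack/vstack make you remember which axis each one targets.
    s0 = np.stack([a, b], axis=0)    # shape (2, 3, 4)
    s2 = np.stack([a, b], axis=2)    # shape (3, 4, 2), same as np.dstack([a, b])
    assert np.array_equal(s2, np.dstack([a, b]))

    # einsum as a labeled transpose: the letters name the axes being permuted.
    x = np.random.randn(2, 3, 4)
    y = np.einsum("ijk->kij", x)
    assert np.array_equal(y, x.transpose(2, 0, 1))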

okigan•3mo ago
Fantastic article.

I don’t use numpy often enough, but this explains the many WTF moments and why it’s so annoying to get numpy pieces to work together.

crescit_eundo•3mo ago
Dupe. Posted a number of times the past day and a half:

https://news.ycombinator.com/item?id=44072775

https://news.ycombinator.com/item?id=44063553

https://news.ycombinator.com/item?id=44078019

https://news.ycombinator.com/item?id=44063490

homarp•3mo ago
But only https://news.ycombinator.com/item?id=44063490 has 'some' comments, so this current discussion is better.
gus_massa•3mo ago
There is an "older" discussion with a different title: https://news.ycombinator.com/item?id=43996431 (488 points | 9 days ago | 212 comments)
palmtree3000•3mo ago
That's a different post.
gus_massa•3mo ago
You are right. It's a 1 letter difference in the URL and I missed it. Sorry for the noise.
kevmo314•3mo ago
I’m not convinced by the loops proposal. TensorFlow had this kind of lazy evaluation (except I guess TF was the worst of both worlds) and it makes debugging very difficult to the point that I believe it’s the main reason PyTorch won out. Such systems are great if they work perfectly but they never do.

NumPy definitely has some rough edges, I sympathize with the frustrations for sure.

mpascale00•3mo ago
In the way that `ggplot2` tries to abstract those common "high-dimensional" graphing components into an intuitive grammar, such that in many places you can guess the sequence of commands correctly, I would love to see an equivalent ergonomic notation here. This gets part of the way there by acknowledging the problem.

Mathematical operations aren't obliged to respect the Zen of Python, but I like the thought that we could make it so that most have an obvious expression.

shiandow•3mo ago
That is amazing. My main doubt would be how future proof this is. Does it wrap numpy? Or something equivalent? Does it require continuous development to keep up?

Also I both understand the need for declaring the matrices up front and think it's a bit of a shame that it is not seamless.

Here are some (ill-advised?) alternatives:

    X, Y, Z = dp.Slots()
    
    with dp.Slot() as X:
        ...
    
    from dp import Slot as new
    X = new['i','j'] = ...

    X = dp['i','j'] = ...
darepublic•3mo ago
You'd need transpilation rather than relying on this library being present. I like the idea a lot though.
dynm•3mo ago
(author here) This wraps JAX and JAX's version of NumPy. It would surely require some development to keep up, although it's quite short and simple (only 700 lines), so I don't think it would be a big burden. That said, I should be clear that my goal here is just to show that this is possible/easy, and possibly inspire existing array packages to consider adding this kind of syntax.

I like your alternatives! I agree that having to write

  X = dp.Slot()
before assigning to X is unfortunate. I settled on the current syntax mostly just because I thought it was the choice that made it most "obvious" what was happening under the hood. If you really want to, you could use the walrus operator and write

  (X := dp.Slot())['i','j'] = ...
but this cure seems worse than the disease...

Actually, doing something like

  X = new['i','j'] = ...
could simplify the implementation. Currently, a Slot has to act both like an Array and a Slot, which is slightly awkward. If new is a special object, it could just return an Array, which would be a bit nicer.
turtletontine•3mo ago
Pretty sure Numpy’s einsum[1] function allows all of this reasoning in vanilla numpy (albeit with a different interface, which I assume this author likes less than their own). I'm quite sure that first example of how annoying numpy can be could be written much more simply with einsum.

[1]: https://numpy.org/doc/stable/reference/generated/numpy.einsu...
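For context, a small example of the kind of batched contraction einsum expresses well (hypothetical shapes; note that einsum covers products and sums over labeled axes, not the linear solve itself):

    import numpy as np

    A = np.random.randn(4, 5, 3, 3)
    X = np.random.randn(4, 3)
    Y = np.random.randn(5, 3)

    # Batched bilinear form Y[j] @ A[i, j] @ X[i], written as one labeled contraction.
    Z = np.einsum("jk,ijkl,il->ij", Y, A, X)

    # Loop equivalent, for comparison.
    Z_loop = np.array([[Y[j] @ A[i, j] @ X[i] for j in range(5)] for i in range(4)])
    assert np.allclose(Z, Z_loop)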

jampekka•3mo ago
Sure, but einsum needs a syntax and concepts of its own, and more importantly it does not work if you need to do something other than a limited set of matrix operations.
sottol•3mo ago
The author posted a previous article about why they don't like numpy and their problems with einsum:

https://dynomight.net/numpy/

adgjlsfhk1•3mo ago
I think you could fit linear solves into a "generalized Einstein notation", but the other option is to support more complex array types, in which case a batched linear solve can be reframed as a single linear solve with a block-diagonal matrix.
yorwba•3mo ago
How do you invert a matrix with einsum?
BeetleB•3mo ago
Why do you need to invert a matrix?
yorwba•3mo ago
It's what the "first example of how annoying numpy can be" does, using either np.linalg.solve in a loop or a cursed multidimensional index rearrangement.
rfoo•3mo ago
The "cursed multidimensional index rearrangement" is mostly for doing inner product in the weird way demonstrated. Granted, it proves author's point - you need to be fluent in NumPy to write this and can't take the Go approach of "I refuse to learn anything you better make me up to speed in 5 minutes".

The code could be just:

    AiX = np.linalg.solve(A, X[:, np.newaxis, :, np.newaxis]).squeeze()
    Z = np.vecdot(Y, AiX)
Or if you don't like NumPy 2.0:

    AiX = np.linalg.solve(A, X[:, np.newaxis, :])
    Z = np.einsum("jk,ijk->ij", Y, AiX)
The NumPy 1.x code is very intuitive: you have A[i, j, n] and X[i, n], and you want to use the same X[i] for all A[i, j], so just add an axis in the middle. Broadcasting, which the author very strongly refuses to understand, deals with all the messy parts.

Alternatively, if you hate `[:, np.newaxis, :]` syntax, you may do:

    I, J, K, K = A.shape
    AiX = np.linalg.solve(A, X.reshape(I, 1, K, 1)).squeeze()
    Z = np.vecdot(Y, AiX)
I can read this faster than nested for-loops. YMMV.
ltbarcly3•3mo ago
The kind of person with the background to need these operations, and who is working on the kinds of problems where this stuff comes up, is more than capable of learning numpy's syntax. It's not that bad.

Avoiding the special-purpose tools designed by and used by the people who work on these problems every day is the instinct of someone who has just started needing them and wants to solve new kinds of problems with the tools they already know. But the people with experience don't do it that way (for a reason); you need to learn what the current best solution is before you can transcend it.

Guys, every few months some well-meaning person posts their great new library here that "fixes" something to remove all that pesky complexity. Invariably it's made by a beginner with whatever it is, who doesn't understand why the complexity exists, and we never hear of them or the library again. This is absolutely one of those times.

lblume•3mo ago
I agree that this may often be the case, but here the author does seem to have very strong points in favor of the proposal. I also think the accusation that dynomight doesn't understand NumPy well enough is unjustified, based on the points given.
Tarq0n•3mo ago
This is how terrible APIs are kept in use, wasting professionals' time and making the field far less accessible to newcomers.

Compare ggplot to matplotlib. They both serve the same purpose but one has an API you can actually reason about and use without needing to consult the documentation every five minutes.

ltbarcly3•3mo ago
Matplotlib is just MATLAB: it's a free MATLAB implementation. If you don't know MATLAB then absolutely don't use matplotlib; it's a terrible library except for that one thing. This is what I'm talking about, though: you are complaining about something without knowing why it is the way it is.
jampekka•3mo ago
Lots to like here but I'm not so sure about this:

> In DumPy, every time you index an array or assign to a dp.Slot, it checks that all indices have been included.

Not having to specify all indices makes for more generic implementations. Sure, the broadcasting rules could be simpler and more consistent, but in the meantime (implicit) broadcasting is what makes NumPy so powerful and flexible.

Also, I think straight-up vmap would be cleaner IF Python did not intentionally make lambdas/FP so restricted and cumbersome, apparently for some emotional reasons.
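As a small illustration of the genericity point (my own example, not from the article): with implicit broadcasting, one definition covers a single vector or any batch of them, with no per-rank code.

    import numpy as np

    def normalize(x, axis=-1):
        # Scale vectors to unit length along `axis`; works for any batch shape.
        norm = np.linalg.norm(x, axis=axis, keepdims=True)
        return x / norm

    v = np.random.randn(3)           # a single vector
    V = np.random.randn(10, 3)       # a batch of vectors
    W = np.random.randn(7, 10, 3)    # a batch of batches

    for arr in (v, V, W):
        assert normalize(arr).shape == arr.shape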

cyanydeez•3mo ago
Implicit means easy to write once, but hard to debug, extend, and read.
jampekka•3mo ago
It also means that you don't have to reimplement the same things all over again when your dimensions change.
cyanydeez•3mo ago
Assuming you understand how the inconsistent broadcasting works, which is part of the problem.
thanhhaimai•3mo ago
For solo work, the terseness might work, but usually only in the short term. Code I wrote 6 months ago looks like someone else's code. For team work, I'd prefer to be explicit if possible. It saves both my teammates' time and my own (when I eventually forget my code 6 months from now).
jampekka•3mo ago
It's not (only) about terseness. It's about generality.
netbioserror•3mo ago
I think this sort of DSL construction is a perfect fit for languages with macros: Lisp, Nim, etc. I spend a lot of time on both so I might explore the possibilities. What should a higher-dimensional array indexing, looping, and broadcasting syntax even look like, if until now it's just been kludges? Is it just APL but with actual words?
mlochbaum•3mo ago
Author of BQN here. I agree with how the section "What about APL?" describes the APL family as not fundamentally better (although details like indexing are often less messy). I outlined a system with lexically scoped named axes at https://gist.github.com/mlochbaum/401e379ff09d422e2761e16fed... . The linear algebra example would end up something like this:

    solve(X[i,_], Y[j,_], A[i,j,_,_]) = over i, j/+: Y * linalg_solve(A, X)
adgjlsfhk1•3mo ago
hear me out: Julia.
bbminner•3mo ago
Always wanted to experiment with a syntax like this myself, so thanks to the author! I completely agree with your reasoning re the complexity of the mental model for indexing vs broadcasting. Moreover, it appears to me that such a representation should allow for finding more optimal low-level impls (something like deeper "out of order" op fusing, idk)? I saw a paper from either Nvidia or Meta around five years ago doing exactly that, translating an index-based meta-language built on top of Python into CUDA kernels (usually generating several variants and picking the best); I can't find the reference, unfortunately.
bee_rider•3mo ago
It seems like a neat idea. If it can just be layered on top of Jax pretty easily… I dunno, seems so simple it might actually get traction?

I wish I could peek at the alternative universe where Numpy just didn’t include broadcasting. Broadcasting is a sort of ridiculous idea. Trying to multiply an NxM matrix by a 1x1 matrix… should return an error, not perform some other operation totally unrelated to matrix multiplication!
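Concretely, the behavior being objected to, versus what matrix multiplication itself does (a quick standalone demo):

    import numpy as np

    A = np.random.randn(3, 4)
    B = np.array([[2.0]])     # a 1x1 array

    print((A * B).shape)      # (3, 4): B is broadcast and applied elementwise
    try:
        A @ B                 # actual matrix multiplication refuses the shapes
    except ValueError:
        print("matmul rejects (3, 4) @ (1, 1)")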

jampekka•3mo ago
Broadcasting is an excellent idea. Implementations do have warts, but e.g. pytorch would be really painful without it.

Broadcasting is a sort of generalization of the idea of scalar-matrix product. You could make that less "ridiculous" by requiring a Hadamard product with a constant value matrix instead.

bee_rider•3mo ago
Scalar-matrix multiplication doesn’t seem that weird; automatically converting a 1x1 matrix to a scalar is the weird seeming thing, IMO.
duchenne•3mo ago
Looks awesome. Can we get the same thing for pytorch?
joshjob42•3mo ago
I think I'll just use Julia, since it has great libraries for doing matrix operations and mapping loops etc onto GPUs and handling indices etc. And if you need some Python library it's easily available using the fantastic PyCall library.

This is a big improvement over numpy, but I don't see much of a compelling reason to go back to Python.

jampekka•3mo ago
> I don't see much of a compelling reason to go back to Python.

Being able to use sane programming workflows is quite a compelling reason.

adgjlsfhk1•3mo ago
what sane programming workflows do you find Julia lacking?
jampekka•3mo ago
Running programs.
odo1242•3mo ago
Isn’t it just “julia script-name.jl”?
adgjlsfhk1•3mo ago
Yes. Yes it is.
3eb7988a1663•3mo ago
Minor style complaint, but the code font is really light.
credit_guy•3mo ago
I don't think this is such a great solution. With regular numpy you can create functions that take arrays with arbitrary shapes. It is not easy, as the author explained in a prior post, but it is doable. But with loops this just doesn't work. The number of nested loops depends on the shape of the inputs.

My solution, very, very inelegant, was to focus on the scalar case and ask Copilot to vectorize my code. I did that just over the last two days. It is not all that easy. Copilot will give you something, but you still need to tweak it. In the end, I still had to go through the code line by line and make sure I understood what it was doing.
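To make the earlier point concrete (my own illustration, not from the post): a broadcast-style function is rank-agnostic, while a loop version hard-wires its nesting depth.

    import numpy as np

    def softmax(x, axis=-1):
        # Works unchanged for 1-D, 2-D, or N-D input.
        z = x - x.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def softmax_2d_loops(x):
        # Loop version: committed to exactly two dimensions.
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            e = np.exp(x[i] - x[i].max())
            out[i] = e / e.sum()
        return out

    X = np.random.randn(5, 7)
    assert np.allclose(softmax(X), softmax_2d_loops(X))
    assert softmax(np.random.randn(2, 3, 4)).shape == (2, 3, 4)   # 3-D just works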

culebron21•3mo ago
I'm currently working on a tiny project relying on NumPy and agree it's too complicated: it's hard to reason about, and hard to read your own code later.

But the proposal is very questionable. I like loops instead of broadcast (apply_...), but making them this kind of black magic just adds an equal amount of mental load.

As for compiled languages: they do optimize for SIMD, if you're OK running on CPU. Optimizing for CUDA and GPU parallelization... that's quite a niche demand, I think, to justify inconveniencing the more common users.

cl3misch•3mo ago
How does this compare to xarray, which also introduces named axes on numpy arrays?

https://github.com/pydata/xarray

cl3misch•3mo ago
> In NumPy, A*B works if A and B are both scalar. Or if A is 5×1×6 and B is 5×1×6×1. But not if A is 1×5×6 and B is 1×5×6×1. Huh?

> So, I removed it. In DumPy you can only do A*B if one of A or B is scalar or A and B have exactly the same shape. That’s it, anything else raises an error. Instead, use indices, so it’s clear what you’re doing.

I also dislike the seemingly inconsistent adding of last axes with length 1.

I do like the capability that existing axes of length 1 will be broadcast to any length of the other array in the operation, and I use that all the time.

If I understand the text correctly, the author removed that as well.