
Linux gaming is faster because Windows APIs are becoming Linux kernel features

https://www.xda-developers.com/linux-gaming-is-getting-faster-because-windows-apis-are-becoming-l...
456•haunter•3d ago•286 comments

Setting up a free *.city.state.us locality domain (2025)

https://fredchan.org/blog/locality-domains-guide/
460•speckx•9h ago•150 comments

A History of IDEs at Google

https://laurent.le-brun.eu/blog/a-history-of-ides-at-google
245•laurentlb•4d ago•183 comments

Princeton mandates proctoring for in-person exams, upending 133 year precedent

https://www.dailyprincetonian.com/article/2026/05/princeton-news-adpol-proctoring-in-person-exami...
207•bookofjoe•3h ago•268 comments

Chess puzzle I found in my dad's old book

https://ardoedo.it/kempelen/
54•Eswo•2d ago•16 comments

Marco Polo: Finding a friend with only distance and motion

https://www.jackhogan.me/blog/marco-polo
13•jackhogan11•2d ago•1 comment

The Emacsification of Software

https://sockpuppet.org/blog/2026/05/12/emacsification/
164•rdslw•16h ago•108 comments

S-100 Virtual Workbench

https://grantmestrength.github.io/S100/
95•rbanffy•8h ago•20 comments

Xs of Y – roguelike that names itself every run. Written in 4kLoC

https://github.com/nooga/xsofy
141•andsoitis•3d ago•64 comments

Launch HN: Ardent (YC P26) – Postgres sandboxes in seconds with zero migration

https://www.tryardent.com/
60•vc289•7h ago•22 comments

Tell HN: Don't use Claude Design, lost access to my projects after unsubscribing

112•pycassa•2h ago•44 comments

Twin brothers wipe 96 government databases minutes after being fired

https://arstechnica.com/tech-policy/2026/05/drop-database-what-not-to-do-after-losing-an-it-job/
260•jnord•1d ago•197 comments

The US is winning the AI race where it matters most: commercialization

https://avkcode.github.io/blog/us-winning-ai-race.html
148•akrylov•10h ago•412 comments

A sentimental tour of late 1990s and early 2000s hacking tools

https://andreafortuna.org/2026/05/13/amarcord/
38•speckx•5h ago•13 comments

Reverting the incremental GC in Python 3.14 and 3.15

https://discuss.python.org/t/reverting-the-incremental-gc-in-python-3-14-and-3-15/107014
190•curiousgal•4d ago•71 comments

How can Apple deal with the memory shortage?

https://asymco.com/2026/05/11/the-great-memory-panic-of-2026/
66•tambourine_man•2d ago•44 comments

Intercom changes name to Fin

https://www.intercom.com/blog/today-intercom-becomes-fin/
15•RyanShook•47m ago•8 comments

New stainless steel can survive conditions for hydrogen production in seawater

https://www.sciencedaily.com/releases/2026/05/260510030950.htm
286•HardwareLust•2d ago•137 comments

Medicare's new payment model is built for AI. Most of the tech world has no idea

https://techcrunch.com/2026/05/12/medicares-new-payment-model-is-built-for-ai-and-most-of-the-tec...
34•brandonb•2h ago•23 comments

Comparing a 1980s memory map to the Raspi Pico

https://medium.com/@noborutakahashi/a-40-year-old-memory-map-comparable-to-todays-raspberry-pi-pi...
14•Schlagbohrer•3d ago•0 comments

An idiot's guide to lead optimisation for proteins

https://magnusross.github.io/posts/protein-lead-optimisation-1/
134•magni121•2d ago•12 comments

Making the news available at no cost is a victory

https://www.sltrib.com/opinion/commentary/2026/05/12/just-days-tribune-reporting/
97•danso•4h ago•102 comments

Meta won't let you block its AI account on Threads

https://www.theverge.com/tech/929091/meta-ai-threads-account-block
74•logickkk1•3h ago•27 comments

Leaving GitHub for Forgejo

https://jorijn.com/en/blog/leaving-github-for-forgejo/
515•jorijn•11h ago•276 comments

Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model

https://github.com/cactus-compute/needle
633•HenryNdubuaku•1d ago•181 comments

Preserving Fisher-Price Pixter

https://dmitry.gr/?r=05.Projects&proj=37.%20Pixter
202•dmitrygr•2d ago•42 comments

I moved my digital stack to Europe

https://monokai.com/articles/how-i-moved-my-digital-stack-to-europe/
869•monokai_nl•12h ago•530 comments

MacBook Neo Deep Dive: Benchmarks, Wafer Economics, and the 8GB Gamble

https://www.jdhodges.com/blog/macbook-neo-benchmarks-analysis/
134•tosh•5h ago•121 comments

"Not Medically Necessary": Helping America's Health Insurers Deny Coverage

https://www.propublica.org/article/evicore-health-insurance-denials-cigna-unitedhealthcare-aetna-...
154•ceejayoz•4h ago•118 comments

Substrate (YC S24) Is Hiring a Technical Success Manager

https://www.ycombinator.com/companies/substrate/jobs/T2fMBhD-technical-success-manager
1•kunle•11h ago

Comparing Parallel Functional Array Languages: Programming and Performance

https://arxiv.org/abs/2505.08906
91•vok•12mo ago

Comments

yubblegum•12mo ago
Chapel got a mention in the 'Related Work' section. I looked at it a few years ago and found it compelling (but I don't do HPC, so it was just window shopping). What's the HN feedback on Chapel?

https://chapel-lang.org/

marai2•12mo ago
If you scroll down on the Chapel-lang website, there seems to be a lot of activity happening with this language. There is even going to be a ChapelCon 2025.

https://chapel-lang.org/blog/posts/chapelcon25-announcement/

throwaway17_17•11mo ago
Chapel and Lustre (a parallel, distributed file system) from Cray were funded by DARPA’s High Productivity Computing Systems program. These projects, along with Sun's Fortress, were developed explicitly to enable and ‘simplify’ the programming of distributed “supercomputers”. The artifacts, along with the published documentation and research, are of particularly high quality.

Even if you aren’t involved in HPC I’d say the concepts transfer or provide a great basis for parallel and distributed idioms and methodologies that can be adapted to existing languages or used in development of new languages.

TL;DR - Chapel is cool and if you are interested in the general subject matter (despite a different focus) Fortress, which is discontinued, should also be checked out.

bradcray•11mo ago
@yubblegum: I'm unfairly biased towards Chapel (positively), so I won't try to characterize HN's opinion of it. But I did want to note that while Chapel's original and main reason for being is HPC, now that everyone lives in a parallel-computing world, users also benefit from using Chapel in desktop environments where they want to do multicore and/or GPU programming. One such example is covered in this interview with an atmospheric science researcher for whom it has replaced Python as his go-to desktop language: https://chapel-lang.org/blog/posts/7qs-dias/

yubblegum•11mo ago
Thank you Brad! I was in fact wondering about GPU use myself. Does it work with Apple's M# GPUs?

Btw, I was looking at the docs for GPU support [1], and some unsolicited feedback from a potential user is that the setup process needs to become less painful. For example, yesterday I installed it via brew, but then hit the GPU setup page and noted that I now needed to build from source.

(Back in the day, one reason some of Sun's efforts to extend Java's fiefdom faltered was the friction of setting up (iirc) things like Applets, etc. I think Chapel deserves a far wider audience.)

[1]: https://chapel-lang.org/docs/technotes/gpu.html#setup (for others - you obviously know the link /g)

p.s. just saw your comment from last year - dropping it here for others: https://news.ycombinator.com/item?id=39032481

bradcray•11mo ago
@yubblegum: I'm afraid we don't have an update on support for Apple GPUs since last year's comment. While it comes up from time to time, nobody has opened an issue for it yet (please feel encouraged to!), and it isn't something we've had the chance to prioritize, as a lot of our recent work has focused on improving tooling support and addressing user requests.

I'll take your feedback about simplifying GPU-based installs back to our team, and have noted it on this thematically related issue: https://github.com/chapel-lang/chapel/issues/25187#issuecomm...

munchler•12mo ago
Are these languages pure in the functional sense? E.g., do they allow/encourage mutation? My understanding is that APL permits mutable state and side effects, but maybe they are rarely used in practice? If you're modifying the contents of an array in place, I don't think it's reasonable to consider that functional.
zfnmxt•12mo ago
Futhark, SaC, and Accelerate have purely functional semantics. Futhark has something called "in-place updates" that operationally mutate the given array, but semantically they work as if a new array is created (and are statically guaranteed to work this way by the type system).
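
A minimal Python sketch of the semantics described above (hypothetical names, not Futhark syntax): the observable behaviour of an "in-place update" is always "build a new array", even though a compiler whose type system proves the old array is dead is free to implement it as a real mutation:

```python
def with_update(arr, i, x):
    """Semantics of a Futhark-style 'in-place update': the result is a
    brand-new array differing from `arr` only at index i. A compiler
    that can prove `arr` is never used afterwards may implement this
    as an actual mutation without changing observable behaviour."""
    new = list(arr)  # reference semantics: copy, never touch the argument
    new[i] = x
    return new

a = [1, 2, 3]
b = with_update(a, 0, 99)
assert a == [1, 2, 3]   # the original is untouched
assert b == [99, 2, 3]  # only the result reflects the update
```
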
RodgerTheGreat•12mo ago
APL arrays are values in the same sense as value types in any functional language. You don't explicitly modify arrays in-place; if they happen to have a refcount of 1 operations may happen in-place as an optimization, but not in a manner which observably alters program behavior.
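
A toy Python sketch of that refcount trick (not APL's actual implementation): share the underlying buffer on copy, and duplicate it only when a write hits a shared buffer, so mutation is never observable:

```python
class Val:
    """Tiny copy-on-write array value: immutable to the observer,
    but a buffer with a single owner is mutated in place."""
    def __init__(self, items):
        self._buf = list(items)
        self._shared = [1]  # how many Val objects share _buf

    def clone(self):
        # Copying is O(1): share the buffer, bump the owner count.
        other = Val.__new__(Val)
        other._buf = self._buf
        self._shared[0] += 1
        other._shared = self._shared
        return other

    def amend(self, i, x):
        if self._shared[0] > 1:       # buffer is shared: copy first
            self._shared[0] -= 1
            self._buf = list(self._buf)
            self._shared = [1]
        self._buf[i] = x              # sole owner: mutate in place
        return self

    def items(self):
        return list(self._buf)

a = Val("hello")
b = a.clone()
b.amend(0, "j")
assert "".join(a.items()) == "hello"  # a unaffected: copy-on-write kicked in
assert "".join(b.items()) == "jello"

c = Val("hello")
buf_before = id(c._buf)
c.amend(0, "c")
assert id(c._buf) == buf_before       # sole owner: genuinely in place
```

Either way the program sees pure value semantics; the in-place path is purely an optimization.
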
grg0•11mo ago
Accelerate is a Haskell library/eDSL.
axman6•11mo ago
I wasn’t expecting to personally know two of the authors, but having Accelerate included makes sense.
geocar•11mo ago
> My understanding is that APL permits mutable state and side effects ... If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.

      a←'hello'
      a[1]←'c'
This does _not_ modify the array in-place. It's actually the same as:

      a←'hello'
      a←'c'@1⊢a
which is more obviously functional. It is easy to convince yourself of this:

      a←'hello'
      b←a
      b[1]←'j'
      a,b
returns 'hellojello' and not 'jellojello'.
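
For readers who don't speak APL, the same experiment in Python (0-indexed) shows the contrast: plain assignment gives reference semantics, and an explicit copy recovers the value semantics described above:

```python
# Reference semantics: a and b are the SAME object, so the write is visible
# through both names -- the opposite of the APL behaviour.
a = list('hello')
b = a
b[0] = 'j'
assert ''.join(a + b) == 'jellojello'

# Value semantics, recovered by copying: mutating b leaves a untouched,
# matching APL's 'hellojello' result.
a = list('hello')
b = list(a)  # independent copy
b[0] = 'j'
assert ''.join(a + b) == 'hellojello'
```
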
teleforce•12mo ago
Notice that all the languages mentioned depend on an external BLAS library (e.g., OpenBLAS) for performance.

The D language has excellent support for functional and array features, along with parallelism. On top of that, and not well known, it has a high-performance native BLAS-style library with ergonomics and intuitiveness similar to Python's [1].

[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen (2016):

http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...

zfnmxt•11mo ago
> Notice that all the languages mentioned depend on an external BLAS library (e.g., OpenBLAS) for performance.

That's incorrect. Futhark doesn't even have linear algebra primitives---everything has to be done in terms of map/reduce/etc: https://github.com/diku-dk/linalg/blob/master/lib/github.com...
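
As a sketch of what "everything in terms of map/reduce" looks like (in Python rather than Futhark, with hypothetical names), here is a matrix-vector product built only from those combinators:

```python
from functools import reduce

def dotprod(xs, ys):
    # Dot product as a map (elementwise *) followed by a reduce (+),
    # the style a library like the one linked above is built in.
    return reduce(lambda acc, v: acc + v,
                  map(lambda p: p[0] * p[1], zip(xs, ys)), 0)

def matvec(m, v):
    # Matrix-vector product: map the dot product over the rows.
    return [dotprod(row, v) for row in m]

assert dotprod([1, 2, 3], [4, 5, 6]) == 32          # 4 + 10 + 18
assert matvec([[1, 0], [0, 1], [2, 3]], [5, 7]) == [5, 7, 31]
```

No BLAS anywhere; a parallelizing compiler is expected to fuse and parallelize the maps and reduces itself.
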

tomsmeding•11mo ago
The same holds for Accelerate, and I'm fairly sure also SaC and APL. DaCe even gets a special mention in the paper in section 10.5 stating that they specifically _do_ use BLAS bindings.
joe_the_user•11mo ago
"Notice that all the languages mentioned depend on an external BLAS library". I didn't notice this 'cause I don't think it's true. For example, it's highly implausible that APL [1] would depend on BLAS [2], considering APL predates BLAS by 5-10 years ("developed in the sixties" versus "between 1971 and 1973"). I don't think Futhark uses BLAS either, but, in modern stupidity, this currently two-hour-old parent comment has taken over the Google results, so it's hard to find references.

[1] https://en.wikipedia.org/wiki/APL_(programming_language)

[2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...

DrNosferatu•11mo ago
Matlab supposedly is “portable APL”.
DrNosferatu•11mo ago
Cleve Moler, the man who invented MATLAB, said: "[I've] always seen MATLAB as 'portable APL'." [1]

…why the downvoting?

[1] - https://computinged.wordpress.com/2012/06/14/matlab-and-apl-...

beagle3•11mo ago
I didn't downvote, but ... as someone who used both, this statement seems nonsensical.

APL is mathematical notation that is also executable. It is all about expressing a mathematical algorithm in a succinct, terse way.

MATLAB is a clunky Fortran-like language that handles simple 2D matrix stuff reasonably tersely (though not remotely as tersely as APL), and does everything else horribly awkwardly and verbosely.

Modern MATLAB might be comparable to 1960s APL, but original MATLAB was most certainly not, and even modern MATLAB isn't comparable to modern APL (and its successors such as BQN and K).

devlovstad•11mo ago
I took a course on massively parallel programming taught by one of the authors of this paper, which extensively used Futhark and CUDA. While I have not used any of these languages since, I have used JAX [1] quite a lot, and what I learned in that course has been quite helpful. Many people will end up writing code for GPUs through different levels of abstraction, but those who can reason about the semantics through functional primitives might have an easier time understanding what's happening under the hood.
vanderZwan•11mo ago
I think the intended footnote was accidentally left out. Were you talking about this Python library?

https://docs.jax.dev/en/latest/index.html

tough•11mo ago
There's a JAX for AI/ML too

https://github.com/jax-ml/jax

but yeah no idea which the OP meant

zfnmxt•11mo ago
> I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA.

PMPH? :)