frontpage.

How does misalignment scale with model intelligence and task complexity?

https://alignment.anthropic.com/2026/hot-mess-of-ai/
63•salkahfi•1h ago•14 comments

The Codex App

https://openai.com/index/introducing-the-codex-app/
511•meetpateltech•7h ago•328 comments

Anki ownership transferred to AnkiHub

https://forums.ankiweb.net/t/ankis-growing-up/68610
212•trms•5h ago•56 comments

GitHub experiences various partial outages/degradations

https://www.githubstatus.com?todayis=2026-02-02
142•bhouston•4h ago•34 comments

xAI joins SpaceX

https://www.spacex.com/updates#xai-joins-spacex
441•g-mork•4h ago•1002 comments

The Connection Machine CM-1 "Feynman" T-shirt

https://tamikothiel.com/cm/cm-tshirt.html
24•tosh•3d ago•4 comments

Ask HN: Who is hiring? (February 2026)

237•whoishiring•9h ago•300 comments

Julia

https://borretti.me/fiction/julia
29•ashergill•2h ago•3 comments

Hacking Moltbook

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
241•galnagli•9h ago•151 comments

Court orders restart of all US offshore wind power construction

https://arstechnica.com/science/2026/02/court-orders-restart-of-all-us-offshore-wind-construction/
180•ck2•3h ago•73 comments

Joedb, the Journal-Only Embedded Database

https://www.joedb.org/index.html
36•mci•3d ago•6 comments

Firefox Getting New Controls to Turn Off AI Features

https://www.macrumors.com/2026/02/02/firefox-ai-toggle/
60•stalfosknight•2h ago•18 comments

4x faster network file sync with rclone (vs rsync) (2025)

https://www.jeffgeerling.com/blog/2025/4x-faster-network-file-sync-rclone-vs-rsync/
256•indigodaddy•3d ago•130 comments

Nano-vLLM: How a vLLM-style inference engine works

https://neutree.ai/blog/nano-vllm-part-1
217•yz-yu•13h ago•24 comments

Advancing AI Benchmarking with Game Arena

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/kaggle-game-arena-updates/
106•salkahfi•8h ago•46 comments

The largest number representable in 64 bits

https://tromp.github.io/blog/2026/01/28/largest-number-revised
79•tromp•7h ago•55 comments

Zig Libc

https://ziglang.org/devlog/2026/#2026-01-31
146•ingve•8h ago•49 comments

Todd C. Miller – Sudo maintainer for over 30 years

https://www.millert.dev/
295•wodniok•8h ago•171 comments

Carnegie Mellon University Computer Club FTP Server

http://128.237.157.9/pub/
3•1vuio0pswjnm7•4d ago•1 comment

Training a trillion parameter model to be funny

https://jokegen.sdan.io/blog
14•sdan•6d ago•7 comments

Ask HN: Who wants to be hired? (February 2026)

96•whoishiring•9h ago•234 comments

Geologists may have solved mystery of Green River's 'uphill' route

https://phys.org/news/2026-01-geologists-mystery-green-river-uphill.html
140•defrost•12h ago•36 comments

Pretty soon, heat pumps will be able to store and distribute heat as needed

https://www.sintef.no/en/latest-news/2026/pretty-soon-heat-pumps-will-be-able-to-store-and-distri...
143•PaulHoule•1d ago•118 comments

Stelvio: Ship Python to AWS

https://github.com/stelviodev/stelvio
28•todsacerdoti•5h ago•39 comments

IsoCoaster – Theme Park Builder

https://iso-coaster.com/
100•duck•3d ago•23 comments

Why software stocks are getting pummelled

https://www.economist.com/business/2026/02/01/why-software-stocks-are-getting-pummelled
139•petethomas•21h ago•198 comments

UK government launches fuel forecourt price API

https://www.gov.uk/guidance/access-the-latest-fuel-prices-and-forecourt-data-via-api-or-email
90•Technolithic•12h ago•103 comments

Nvidia shares are down after report that its OpenAI investment stalled

https://www.cnbc.com/2026/02/02/nvidia-stock-price-openai-funding.html
103•greatgib•5h ago•39 comments

Show HN: Adboost – A browser extension that adds ads to every webpage

https://github.com/surprisetalk/AdBoost
93•surprisetalk•12h ago•108 comments

General Graboids: Worms and Remote Code Execution in Command and Conquer

https://www.atredis.com/blog/2026/1/26/generals
29•speckx•6d ago•5 comments

Comparing Parallel Functional Array Languages: Programming and Performance

https://arxiv.org/abs/2505.08906
91•vok•8mo ago

Comments

yubblegum•8mo ago
Chapel got a mention in the 'Related Work' section. I looked at it a few years ago and found it compelling (but I don't do HPC so it was just window shopping). What's the HN feedback on Chapel?

https://chapel-lang.org/

marai2•8mo ago
If you scroll down on the Chapel-lang website, there seems to be a lot of activity happening with this language. There is even going to be a ChapelCon 2025.

https://chapel-lang.org/blog/posts/chapelcon25-announcement/

throwaway17_17•8mo ago
Chapel and Lustre (a parallel, distributed file system) from Cray were funded by DARPA’s High Productivity Computing Systems program. These projects, along with Fortress from Sun, were developed explicitly to enable and ‘simplify’ the programming of distributed “supercomputers”. The work and artifacts, along with the published documentation and research, are of particularly high quality.

Even if you aren’t involved in HPC I’d say the concepts transfer or provide a great basis for parallel and distributed idioms and methodologies that can be adapted to existing languages or used in development of new languages.

TL;DR - Chapel is cool, and if you are interested in the general subject matter, Fortress (which has a different focus and is now discontinued) should also be checked out.

bradcray•8mo ago
@yubblegum: I'm unfairly biased towards Chapel (positively), so won't try to characterize HN's opinion on it. But I did want to note that while Chapel's original and main reason for being is HPC, now that everyone lives in a parallel-computing world, users also benefit from using Chapel in desktop environments where they want to do multicore and/or GPU programming. One such example is covered in this interview with an atmospheric science researcher for whom it has replaced Python as his go-to desktop language: https://chapel-lang.org/blog/posts/7qs-dias/

yubblegum•8mo ago
Thank you Brad! I was in fact wondering about GPU use myself. Does it work with Apple's M# GPUs?

Btw, I was looking at the docs for GPU [1], and some unsolicited feedback from a potential user is that the setup process needs to become less painful. For example, yesterday I installed it via brew but then hit the setup page for GPU and noted I now needed to build from source.

(Back in the day, one reason some of Sun's efforts to extend Java's fiefdom faltered was the friction of setup for (iirc) things like Applets, etc. I think Chapel deserves a far wider audience.)

[1]: https://chapel-lang.org/docs/technotes/gpu.html#setup (for others - you obviously know the link /g)

p.s. just saw your comment from last year - dropping it here for others: https://news.ycombinator.com/item?id=39032481

bradcray•8mo ago
@yubblegum: I'm afraid we don't have an update on support for Apple GPUs since last year's comment. While it comes up from time to time, nobody has opened an issue for it yet (please feel encouraged to!), and it isn't something we've had the chance to prioritize, since a lot of our recent work has focused on improving tooling support and addressing user requests.

I'll take your feedback about simplifying GPU-based installs back to our team, and have noted it on this thematically related issue: https://github.com/chapel-lang/chapel/issues/25187#issuecomm...

munchler•8mo ago
Are these languages pure in the functional sense? E.g., do they allow/encourage mutation? My understanding is that APL permits mutable state and side effects, but maybe they are rarely used in practice? If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.

zfnmxt•8mo ago
Futhark, SaC, and Accelerate have purely functional semantics. Futhark has something called "in-place updates" that operationally mutate the given array, but semantically they work as if a new array is created (and are statically guaranteed to work this way by the type system).
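
A rough analogue of this idea, sketched here in JAX rather than Futhark (purely as an illustration): the .at[...].set(...) idiom reads like an in-place write, but semantically it returns a fresh array and the original is never observably changed. Under jit, the compiler may still perform the update in place when the old buffer is provably unused, which mirrors the "operationally mutate, semantically copy" point.

      import jax.numpy as jnp

      a = jnp.array([1, 2, 3, 4])
      b = a.at[0].set(99)  # reads like b[0] = 99, but builds a new array

      print(b)  # [99  2  3  4]
      print(a)  # [1 2 3 4] -- the original is untouched
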
RodgerTheGreat•8mo ago
APL arrays are values in the same sense as value types in any functional language. You don't explicitly modify arrays in-place; if they happen to have a refcount of 1, operations may happen in-place as an optimization, but not in a manner which observably alters program behavior.

grg0•8mo ago
Accelerate is a Haskell library/eDSL.

axman6•8mo ago
I wasn't expecting to personally know two of the authors, but having Accelerate included makes sense.

geocar•8mo ago
> My understanding is that APL permits mutable state and side effects ... If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.

      a←'hello'
      a[1]←'c'
This does _not_ modify the array in-place. It's actually the same as:

      a←'hello'
      a←'c'@1⊢a
which is more obviously functional. It is easy to convince yourself of this:

      a←'hello'
      b←a
      b[1]←'j'
      a,b
returns 'hellojello' and not 'jellojello'.
teleforce•8mo ago
Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.

The D language has excellent support for functional and array features, with parallelism support. On top of that, and not widely known, it has a high-performance native BLAS-like library with ergonomics and intuitiveness similar to Python [1].

[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen (2016):

http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...

zfnmxt•8mo ago
> Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS

That's incorrect. Futhark doesn't even have linear algebra primitives---everything has to be done in terms of map/reduce/etc: https://github.com/diku-dk/linalg/blob/master/lib/github.com...
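
As a sketch of what linear algebra "in terms of map/reduce" can look like, written here in JAX for familiarity rather than Futhark, a dot product and a matrix-vector product need nothing beyond an elementwise multiply, a sum-reduction, and a map over rows:

      import jax
      import jax.numpy as jnp

      def dot(x, y):
          # elementwise multiply, then reduce with +
          return jnp.sum(x * y)

      def matvec(A, x):
          # map the per-row dot product over the rows of A
          return jax.vmap(lambda row: dot(row, x))(A)

      A = jnp.arange(6.0).reshape(2, 3)
      x = jnp.array([1.0, 2.0, 3.0])
      print(matvec(A, x))  # [ 8. 26.]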

tomsmeding•8mo ago
The same holds for Accelerate, and I'm fairly sure also SaC and APL. DaCe even gets a special mention in the paper in section 10.5 stating that they specifically _do_ use BLAS bindings.
joe_the_user•8mo ago
"Notice that all the all the languages mentioned depends on the external BLAS library". I didn't notice this 'cause I don't think it's true. For example, it highly implausible that APL[1] would depend on BLAS[2] considering APL predates BLAS by 5-10 years ("developed in the sixties" versus "between 1971 and 1973"). I don't think Futhark uses BLAS either but in modern stupidity, this currently two hour old parent has taken over Google results so it's hard to find references.

[1] https://en.wikipedia.org/wiki/APL_(programming_language)

[2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...

DrNosferatu•8mo ago
Matlab supposedly is “portable APL”.

DrNosferatu•8mo ago
The man who invented MATLAB, Cleve Moler, said: [I’ve] always seen MATLAB as “portable APL”. [1]

…why the downvoting?

[1] - https://computinged.wordpress.com/2012/06/14/matlab-and-apl-...

beagle3•8mo ago
I didn't downvote, but ... as someone who used both, this statement seems nonsensical.

APL is mathematical notation that is also executable. It is all about putting a mathematical algorithm in a succinct, terse way.

MATLAB is a clunky Fortran-like language that does simple 2D matrix stuff reasonably tersely (though not remotely as tersely as APL), and does everything else horribly awkwardly and verbosely.

Modern MATLAB might be comparable to 1960s APL, but original MATLAB was most certainly not, and even modern MATLAB isn't comparable to modern APL (and its successors such as BQN and K).

devlovstad•8mo ago
I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA. While I have not used any of these languages since, I have used JAX[1] quite a lot, where the learnings from this course have been quite helpful. Many people will end up writing code for GPUs through different levels of abstraction, but those who are able to reason about the semantics through functional primitives might have an easier time understanding what's happening under the hood.
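
To make the point about functional primitives concrete, here is a minimal JAX sketch (my own illustration, not material from the course or the paper): the computation is a pure per-vector function, vmap turns it into a batched data-parallel one, and jit hands the whole thing to XLA, so the same map/reduce-style reasoning applies whether it runs on CPU or GPU.

      import jax
      import jax.numpy as jnp

      def softmax(logits):
          # pure function over one vector: shift, exponentiate, normalize
          z = logits - jnp.max(logits)
          e = jnp.exp(z)
          return e / jnp.sum(e)

      # vmap maps the per-vector function over a batch; jit compiles via XLA
      batched_softmax = jax.jit(jax.vmap(softmax))

      x = jnp.ones((4, 3))
      print(batched_softmax(x))  # each row sums to 1
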
vanderZwan•8mo ago
I think the intended footnote was accidentally left out. Were you talking about this Python library?

https://docs.jax.dev/en/latest/index.html

tough•8mo ago
There's a JAX for AI/ML too

https://github.com/jax-ml/jax

but yeah no idea which the OP meant

zfnmxt•8mo ago
> I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA.

PMPH? :)