frontpage.

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•gozzoo•31s ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•41s ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
1•tosh•1m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•2m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•7m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•9m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•12m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•13m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•14m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•15m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•15m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•16m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•16m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•19m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•23m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•23m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•28m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•29m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•30m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•33m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•35m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•36m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•36m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•36m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•38m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•39m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•42m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•44m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•44m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•45m ago•1 comment

Comparing Parallel Functional Array Languages: Programming and Performance

https://arxiv.org/abs/2505.08906
91•vok•8mo ago

Comments

yubblegum•8mo ago
Chapel got a mention in the 'Related Work' section. I looked at it a few years ago and found it compelling (but I don't do HPC, so it was just window shopping). What's the HN feedback on Chapel?

https://chapel-lang.org/

marai2•8mo ago
If you scroll down on the Chapel-lang website, there seems to be a lot of activity happening with this language. There is even going to be a ChapelCon 2025.

https://chapel-lang.org/blog/posts/chapelcon25-announcement/

throwaway17_17•8mo ago
Chapel and Lustre (a parallel, distributed file system) from Cray were funded by DARPA’s High Productivity Computing Systems program. This work, along with Fortress from Sun, was developed explicitly to enable and ‘simplify’ the programming of distributed “supercomputers”. The work and artifacts, along with the published documentation and research, are of particularly high quality.

Even if you aren’t involved in HPC, I’d say the concepts transfer or provide a great basis for parallel and distributed idioms and methodologies that can be adapted to existing languages or used in the development of new languages.

TL;DR - Chapel is cool, and if you are interested in the general subject matter, Fortress (despite its different focus, and although it is discontinued) should also be checked out.

bradcray•8mo ago
@yubblegum: I'm unfairly biased towards Chapel (positively), so won't try to characterize HN's opinion on it. But I did want to note that while Chapel's original and main reason for being is HPC, now that everyone lives in a parallel-computing world, users also benefit from using Chapel in desktop environments where they want to do multicore and/or GPU programming. One such example is covered in this interview with an atmospheric science researcher for whom it has replaced Python as his go-to desktop language: https://chapel-lang.org/blog/posts/7qs-dias/
yubblegum•8mo ago
Thank you Brad! I was in fact wondering about GPU use myself. Does it work with Apple's M# GPUs?

Btw, I was looking at the docs for GPU support [1], and unsolicited feedback from a potential user is that the setup process needs to become less painful. For example, yesterday I installed it via brew, but then hit the GPU setup page and noted that I now needed to build from source.

(Back in the day, one reason some of Sun's efforts to extend Java's fiefdom faltered was the friction of setting up (iirc) things like Applets, etc. I think Chapel deserves a far wider audience.)

[1]: https://chapel-lang.org/docs/technotes/gpu.html#setup (for others - you obviously know the link /g)

p.s. just saw your comment from last year - dropping it here for others: https://news.ycombinator.com/item?id=39032481

bradcray•8mo ago
@yubblegum: I'm afraid we don't have an update on support for Apple GPUs since last year's comment. While it comes up from time to time, nobody has opened an issue for it yet (please feel encouraged to!), and it isn't something we've had the chance to prioritize, as a lot of our recent work has focused on improving tooling support and addressing user requests.

I'll take your feedback about simplifying GPU-based installs back to our team, and have noted it on this thematically related issue: https://github.com/chapel-lang/chapel/issues/25187#issuecomm...

munchler•8mo ago
Are these languages pure in the functional sense? E.g., do they allow/encourage mutation? My understanding is that APL permits mutable state and side effects, but maybe they are rarely used in practice? If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.
zfnmxt•8mo ago
Futhark, SaC, and Accelerate have purely functional semantics. Futhark has something called "in-place updates" that operationally mutate the given array, but semantically they work as if a new array is created (and are statically guaranteed to work this way by the type system).
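
To make that concrete, here is a minimal sketch of the same idea in JAX rather than Futhark (an analogy only: the Futhark guarantee comes from its type system, while JAX simply gives every index update copy semantics and leaves any in-place optimization to its compiler):

    import jax.numpy as jnp

    a = jnp.arange(5)       # [0 1 2 3 4]
    b = a.at[1].set(99)     # semantically a brand-new array; a is untouched
    print(a)                # [0 1 2 3 4]
    print(b)                # [0 99 2 3 4]

Under jit, the compiler is free to perform such an update in place when it can tell the old array is no longer needed, which is the same "operationally mutating, semantically pure" behaviour described above.
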
RodgerTheGreat•8mo ago
APL arrays are values in the same sense as value types in any functional language. You don't explicitly modify arrays in-place; if they happen to have a refcount of 1, operations may happen in-place as an optimization, but not in a manner which observably alters program behavior.
grg0•8mo ago
Accelerate is a Haskell library/eDSL.
axman6•8mo ago
I wasn’t expecting to personally know two of the authors, but having Accelerate included makes sense.
geocar•8mo ago
> My understanding is that APL permits mutable state and side effects ... If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.

      a←'hello'
      a[1]←'c'
This does _not_ modify the array in-place. It's actually the same as:

      a←'hello'
      a←'c'@1⊢a
which is more obviously functional. It is easy to convince yourself of this:

      a←'hello'
      b←a
      b[1]←'j'
      a,b
returns 'hellojello' and not 'jellojello'.
teleforce•8mo ago
Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.

The D language has excellent support for functional and array features, along with parallelism. On top of that, unlike the others, it has a high-performance native BLAS-style library with ergonomics and intuitiveness similar to Python's [1].

[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen (2016):

http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...

zfnmxt•8mo ago
> Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.

That's incorrect. Futhark doesn't even have linear algebra primitives---everything has to be done in terms of map/reduce/etc: https://github.com/diku-dk/linalg/blob/master/lib/github.com...
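
As a sketch of what "everything in terms of map/reduce" looks like, here is a matrix multiply written that way in JAX rather than Futhark (illustrative only; it is not taken from the paper or from the linalg package linked above). The source program contains only an elementwise map and a sum reduction, with no BLAS call:

    import jax
    import jax.numpy as jnp

    def dot(u, v):
        # map: elementwise multiply, then reduce: sum
        return jnp.sum(u * v)

    def matmul(a, b):
        # map the dot product over every row of a and every column of b
        return jax.vmap(lambda row: jax.vmap(lambda col: dot(row, col))(b.T))(a)

    a = jnp.ones((4, 3))
    b = jnp.ones((3, 5))
    print(matmul(a, b).shape)  # (4, 5)

Whether the backend ultimately lowers this to a tuned kernel is a separate question; the point is only that nothing in the source program depends on BLAS bindings.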

tomsmeding•8mo ago
The same holds for Accelerate, and I'm fairly sure also for SaC and APL. DaCe even gets a special mention in the paper, in section 10.5, stating that they specifically _do_ use BLAS bindings.
joe_the_user•8mo ago
"Notice that all the all the languages mentioned depends on the external BLAS library". I didn't notice this 'cause I don't think it's true. For example, it highly implausible that APL[1] would depend on BLAS[2] considering APL predates BLAS by 5-10 years ("developed in the sixties" versus "between 1971 and 1973"). I don't think Futhark uses BLAS either but in modern stupidity, this currently two hour old parent has taken over Google results so it's hard to find references.

[1] https://en.wikipedia.org/wiki/APL_(programming_language)

[2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...

DrNosferatu•8mo ago
MATLAB is supposedly “portable APL”.
DrNosferatu•8mo ago
The man who invented MATLAB, Cleve Moler, said: [I’ve] always seen MATLAB as “portable APL”. [1]

…why the downvoting?

[1] - https://computinged.wordpress.com/2012/06/14/matlab-and-apl-...

beagle3•8mo ago
I didn't downvote, but ... as someone who has used both, I find this statement nonsensical.

APL is mathematical notation that is also executable. It is all about putting a mathematical algorithm in a succinct, terse way.

MATLAB is a clunky Fortran-like language that does simple 2D matrix stuff reasonably tersely (though not remotely as tersely as APL), and does everything else horribly awkwardly and verbosely.

Modern MATLAB might be comparable to 1960s APL, but original MATLAB was most certainly not, and even modern MATLAB isn't comparable to modern APL (and its successors such as BQN and K).

devlovstad•8mo ago
I took a course on massively parallel programming, taught by one of the authors of this paper, that used Futhark and CUDA extensively. While I have not used any of these languages since, I have used JAX[1] quite a lot, and what I learned in that course has been quite helpful there. Many people will end up writing code for GPUs through different levels of abstraction, but those who are able to reason about the semantics through functional primitives might have an easier time understanding what's happening under the hood.
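
For readers who have not used it, here is a small illustrative sketch of that style in JAX (assuming the JAX array library discussed in the replies below; the example is made up, not taken from the course or the paper):

    import jax
    import jax.numpy as jnp

    # Write the per-example computation as ordinary scalar/vector code...
    def squared_error(w, x, y):
        return (jnp.dot(w, x) - y) ** 2

    # ...then express the batch as a map (vmap) and a reduction (mean),
    # and let jit compile the whole thing for CPU/GPU/TPU.
    def batched_loss(w, xs, ys):
        return jnp.mean(jax.vmap(squared_error, in_axes=(None, 0, 0))(w, xs, ys))

    batched_loss = jax.jit(batched_loss)

    w = jnp.ones(3)
    xs = jnp.ones((8, 3))
    ys = jnp.zeros(8)
    print(batched_loss(w, xs, ys))  # 9.0

Reasoning about the program as maps and reductions rather than as explicit GPU threads is the same habit of thinking through functional primitives mentioned above.
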
vanderZwan•8mo ago
I think the intended footnote was accidentally left out. Were you talking about this Python library?

https://docs.jax.dev/en/latest/index.html

tough•8mo ago
There's a JAX for AI/LM too

https://github.com/jax-ml/jax

but yeah, no idea which one the OP meant

zfnmxt•8mo ago
> I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA.

PMPH? :)