frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Ask HN: The government of my country blocked VPN access. What should I use?

700•rickybule•8h ago•401 comments

Python: The Documentary

https://lwn.net/Articles/1035537/
28•chmaynard•1h ago•2 comments

Fuck up my site – Turn any website into beautiful chaos

https://www.fuckupmysite.com/?url=https%3A%2F%2Fnews.ycombinator.com&torchCursor=true&comicSans=t...
130•coloneltcb•3h ago•40 comments

Some thoughts on LLMs and software development

https://martinfowler.com/articles/202508-ai-thoughts.html
174•floverfelt•6h ago•159 comments

My startup banking story (2023)

https://mitchellh.com/writing/my-startup-banking-story
155•dvrp•5h ago•66 comments

Uncertain<T>

https://nshipster.com/uncertainty/
237•samtheprogram•7h ago•50 comments

Death by PowerPoint: the slide that killed seven people

https://mcdreeamiemusings.com/blog/2019/4/13/gsux1h6bnt8lqjd7w2t2mtvfg81uhx
44•scapecast•3h ago•10 comments

Expert LSP – the official language server implementation for Elixir

https://github.com/elixir-lang/expert
46•pimienta•3h ago•8 comments

RSS Is Awesome

https://evanverma.com/rss-is-awesome
58•edverma2•1h ago•12 comments

Building your own CLI coding agent with Pydantic-AI

https://martinfowler.com/articles/build-own-coding-agent.html
102•vinhnx•6h ago•21 comments

TuneD is a system tuning service for Linux

https://tuned-project.org/
27•tanelpoder•3d ago•8 comments

Are OpenAI and Anthropic losing money on inference?

https://martinalderson.com/posts/are-openai-and-anthropic-really-losing-money-on-inference/
431•martinald•14h ago•414 comments

AI adoption linked to 13% decline in jobs for young U.S. workers: study

https://www.cnbc.com/2025/08/28/generative-ai-reshapes-us-job-market-stanford-study-shows-entry-l...
169•pseudolus•10h ago•260 comments

Launch HN: Dedalus Labs (YC S25) – Vercel for Agents

43•windsor•8h ago•11 comments

Rupert's Property

https://johncarlosbaez.wordpress.com/2025/08/28/a-polyhedron-without-ruperts-property/
19•robinhouston•2h ago•1 comment

A forgotten medieval fruit with a vulgar name (2021)

https://www.bbc.com/future/article/20210325-the-strange-medieval-fruit-the-world-forgot
65•ohjeez•1d ago•27 comments

Dependent types I › Universes, or types of types

https://www.jonmsterling.com/01ET/index.xml
7•matt_d•1d ago•0 comments

Bad Craziness

https://www.math.columbia.edu/~woit/wordpress/?p=15191
13•jjgreen•1h ago•2 comments

You no longer need JavaScript: an overview of what makes modern CSS so awesome

https://lyra.horse/blog/2025/08/you-dont-need-js/
82•todsacerdoti•4h ago•30 comments

Thrashing

https://exple.tive.org/blarg/2025/08/26/thrashing/
12•pch00•1d ago•1 comment

Speed-coding for the 6502 – a simple example

https://www.colino.net/wordpress/en/archives/2025/08/28/speed-coding-for-the-6502-a-simple-example/
18•mmphosis•3h ago•7 comments

Will AI Replace Human Thinking? The Case for Writing and Coding Manually

https://www.ssp.sh/brain/will-ai-replace-humans/
110•articsputnik•10h ago•90 comments

VLT observations of interstellar comet 3I/ATLAS II

https://arxiv.org/abs/2508.18382
44•bikenaga•6h ago•32 comments

Optimising for maintainability – Gleam in production at Strand

https://gleam.run/case-studies/strand/
87•Bogdanp•9h ago•21 comments

Show HN: SwiftAI – open-source library to easily build LLM features on iOS/macOS

https://github.com/mi12labs/SwiftAI
52•mi12-root•11h ago•11 comments

Web Bot Auth

https://developers.cloudflare.com/bots/reference/bot-verification/web-bot-auth/
38•ananddtyagi•6h ago•37 comments

In Search of AI Psychosis

https://www.astralcodexten.com/p/in-search-of-ai-psychosis
85•venkii•2d ago•48 comments

RFC 8594: The Sunset HTTP Header Field (2019)

https://datatracker.ietf.org/doc/html/rfc8594
24•aiven•5h ago•9 comments

I researched every attempt to stop fascism in history. The success rate is 0%

https://cmarmitage.substack.com/p/i-researched-every-attempt-to-stop
11•rbanffy•38m ago•8 comments

That boolean should probably be something else

https://ntietz.com/blog/that-boolean-should-probably-be-something-else/
84•vidyesh•12h ago•94 comments

GPU Prefix Sums: A nearly complete collection

https://github.com/b0nes164/GPUPrefixSums
75•coffeeaddict1•12h ago
https://dl.acm.org/doi/10.1145/3694906.3743326

Comments

genpfault•11h ago
https://en.wikipedia.org/wiki/Prefix_sum#Applications
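For readers skimming that list: the two scan flavors that come up throughout this thread can be sketched sequentially in a few lines (a GPU implementation like the one in the linked repo parallelizes this, but the contract is the same):

```python
from itertools import accumulate

def inclusive_scan(xs):
    # inclusive_scan([a, b, c]) -> [a, a+b, a+b+c]
    return list(accumulate(xs))

def exclusive_scan(xs):
    # exclusive_scan([a, b, c]) -> [0, a, a+b]
    out, running = [], 0
    for x in xs:
        out.append(running)
        running += x
    return out

print(inclusive_scan([3, 1, 7, 0, 4]))  # [3, 4, 11, 11, 15]
print(exclusive_scan([3, 1, 7, 0, 4]))  # [0, 3, 4, 11, 11]
```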
almostgotcaught•10h ago
This is missing the most important one (in today's world): extracting non-zero elements from a sparse vector/matrix.

https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-co...
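The technique being referred to is stream compaction: flag the survivors, take an exclusive prefix sum of the flags to assign each survivor a unique output slot, then scatter. A minimal sequential sketch (the GPU version does the same three steps in parallel, since the scatter has no write conflicts):

```python
def compact_nonzeros(xs):
    # 1. flag each element that should survive
    flags = [1 if x != 0 else 0 for x in xs]
    # 2. exclusive prefix sum of flags -> each survivor's output slot
    offsets, running = [], 0
    for f in flags:
        offsets.append(running)
        running += f
    # 3. scatter survivors to their slots; slots are unique by construction
    out = [0] * running
    for x, f, o in zip(xs, flags, offsets):
        if f:
            out[o] = x
    return out

print(compact_nonzeros([0, 5, 0, 0, 2, 9, 0]))  # [5, 2, 9]
```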

merope14•9h ago
Not even close. The most important application (in today's world) is radix sort.
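For context on why radix sort belongs in this discussion at all: each pass is a counting sort, and the step that turns a per-digit histogram into bucket start offsets is exactly an exclusive prefix sum. A sequential sketch of an LSD radix sort (GPU versions parallelize the histogram, scan, and scatter):

```python
def counting_pass(keys, shift, bits=8):
    buckets = 1 << bits
    mask = buckets - 1
    # histogram of the current digit
    counts = [0] * buckets
    for k in keys:
        counts[(k >> shift) & mask] += 1
    # exclusive prefix sum over the histogram -> bucket start offsets
    starts, running = [0] * buckets, 0
    for d in range(buckets):
        starts[d] = running
        running += counts[d]
    # stable scatter into the assigned slots
    out = [0] * len(keys)
    for k in keys:
        d = (k >> shift) & mask
        out[starts[d]] = k
        starts[d] += 1
    return out

def radix_sort(keys, key_bits=32, bits=8):
    for shift in range(0, key_bits, bits):
        keys = counting_pass(keys, shift, bits)
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```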
WJW•8h ago
What specific application do you have in mind that radix sort is more important than matrix multiplication?
otherjason•6h ago
I think they were trying to say “radix sort is a more important application of prefix sum than extraction of values from a sparse matrix/vector is.”
WJW•5h ago
I understand what GP meant, but extraction of values from a sparse matrix is an essential operation of multiplying two sparse matrices. Sparse matmult in turn is an absolutely fundamental operation in everything from weather forecasting to logistics planning to electric grid control to training LLMs. Radix sort on the other hand is very nice but (as far as I know) not used nearly as widely. Matrix multiplication is just super fundamental to the modern world.

I would love to be enlightened about some real-world applications of radix sort I may have missed though, since it's a cool algorithm. Hence my question above.

littlestymaar•3h ago
> to training LLMs

LLMs are made from dense matrices, aren't they?

WJW•3h ago
Not always, or rather not exclusively. For example, some types of distillation benefit from sparse-ifying the dense-ish matrices the original was made of [1]. There's also a lot of benefit to be had from sparsity in finetuning [2]. LLMs were merely one of the examples though, don't focus too much on them. The point was that sparse matmul makes up the bulk of scientific computations and a huge amount of industrial computations too. It's probably second only to the FFT in importance, so it would be wild if radix sort managed to eclipse it somehow.

[1] https://developer.nvidia.com/blog/mastering-llm-techniques-i...

[2] https://arxiv.org/html/2405.15525v1

almostgotcaught•2h ago
Almost all performant kernels employ structured sparsity.
woadwarrior01•6h ago
Top K sampling comes to mind, although it's nowhere nearly as important as matmult.
almostgotcaught•6h ago
Ranking models benefit from GPU impls of sort, but yup, they're not nearly as common/important as spmm/spmv.
m-schuetz•6h ago
Is that relevant for 4x4 multiplications? Because at least for me, radix sort is way more important than multiplying matrices beyond 4x4. E.g. for Gaussian Splatting.
coffeeaddict1•10h ago
Related paper by the authors: https://dl.acm.org/doi/10.1145/3694906.3743326
dang•5h ago
We'll put that link in the top text too. Thanks!
m-schuetz•8h ago
That and https://github.com/b0nes164/GPUSorting have been a tremendous help for me, since CUB does not work nicely with the CUDA Driver API. The author is doing amazing work.
luizfelberti•7h ago
This looks amazing. I've been shopping for an implementation of this I could play around with for a while now.

They mention promising results on Apple Silicon GPUs and even cite the contributions from Vello, but I don't see a Metal implementation in there and the benchmark only shows results from an RTX 2080. Is it safe to assume that they're referring to the WGPU version when talking about M-series chips?