Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
2•AlexeyBrin•2m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
1•machielrey•4m ago•0 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
2•tablets•8m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•11m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•13m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•13m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•14m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•19m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•25m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•26m ago•1 comments

Slop News - HN front page right now hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•31m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•33m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
3•tosh•39m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•42m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•43m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
3•goranmoomin•47m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•48m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•49m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•52m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•54m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•55m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
5•1vuio0pswjnm7•57m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
3•1vuio0pswjnm7•59m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•1h ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
2•lembergs•1h ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

AMD GPU Programming in Julia

https://amdgpu.juliagpu.org/dev/
26•pxl-th•9mo ago

Comments

Alifatisk•9mo ago
Very cool project, and AMD graphics cards deserve this kind of work! Very well done. May I ask, is there any reason why one would focus on a single type of graphics card instead of relying on a library that works for other variants too? Is it because you get more fine-grained control that you would lose at a higher abstraction level?
pxl-th•9mo ago
Thanks!

> May I ask, is there any reason why one would focus on a single type of graphics card instead of relying on a library that works for other variants too?

AMDGPU.jl is actually one of several GPU backends supported in Julia. We also support CUDA, Metal, Intel, and OpenCL to varying degrees: https://github.com/JuliaGPU

Each GPU backend implements a common array interface and a way to compile Julia code into low-level kernels, relying on the GPUCompiler.jl infrastructure: https://github.com/JuliaGPU/GPUCompiler.jl

Once that is done, users can write code and low-level kernels (using KernelAbstractions.jl) in a backend-agnostic manner.
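
To make that concrete, here is a minimal sketch of what a backend-agnostic KernelAbstractions.jl kernel looks like (the saxpy! kernel and run_saxpy! wrapper are illustrative names, not taken from any of the packages below); the backend is inferred from the array type, so the same code runs on a ROCArray, a CuArray, or a plain CPU Array:

    using KernelAbstractions
    # using AMDGPU   # uncomment to run on an AMD GPU via ROCArray

    # SAXPY, y .= a .* x .+ y, written once for every backend.
    @kernel function saxpy!(y, a, @Const(x))
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    function run_saxpy!(y, a, x)
        backend = get_backend(y)              # CPU(), ROCBackend(), CUDABackend(), ...
        kernel = saxpy!(backend)              # instantiate the kernel for that backend
        kernel(y, a, x; ndrange = length(y))  # launch over all indices
        KernelAbstractions.synchronize(backend)
        return y
    end

    # CPU:  run_saxpy!(rand(Float32, 1024), 2f0, rand(Float32, 1024))
    # AMD:  run_saxpy!(ROCArray(rand(Float32, 1024)), 2f0, ROCArray(rand(Float32, 1024)))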

Here are some examples of packages that target multiple GPU backends in this way:

- Real-time Gaussian splatting supporting AMD & Nvidia GPUs (probably others as well with minor work): https://github.com/JuliaNeuralGraphics/GaussianSplatting.jl

- AcceleratedKernels.jl, which is like a standard library of GPU algorithms: https://github.com/JuliaGPU/AcceleratedKernels.jl

- NNop.jl, which implements Flash Attention and other fused NN kernels: https://github.com/pxl-th/NNop.jl

- Flux.jl, a deep-learning library: https://github.com/FluxML/Flux.jl

jamiejquinn•9mo ago
OP answered the Julia-specific part, but I'll chime in with a solid, language-agnostic yes: there are still reasons a GPU dev might want to target specific hardware. GPU hardware is, at the moment, more vendor-specific than CPU hardware. Off the top of my head, major differences include specialised hardware like Nvidia's tensor cores, differences in common hardware like cache and register sizes, and niche features like Nvidia's combined Grace Hopper machines or AMD's CPU-GPU hybrid MI300A.

Sophisticated high-level approaches (like Julia's) will be able to utilise/mitigate some of the differences but I don't think we're fully vendor-agnostic quite yet (and probably won't ever be at the evolving cutting edge).

pxl-th•9mo ago
Definitely: with backend-agnostic code you can only target a common set of features. It's convenient to use where it makes sense, as it reduces complexity: one kernel for all backends. And you can actually go a long way with this without sacrificing too much performance.

But to squeeze out maximum performance and use the latest features, you have to target each device individually.