
No More Shading Languages: Compiling C++ to Vulkan Shaders [pdf]

https://xol.io/random/vcc-paper.pdf
35•pjmlp•2d ago

Comments

rgbforge•3h ago
The section discussing Slang is interesting, I didn't know that function pointers were only available for Cuda targets.
raincole•2h ago
Yeah, C++ is the peak language design that everyone loves...
reactordev•2h ago
In game dev they definitely do.
arjonagelhout•2h ago
I think this is indeed the advantage of this paper taking C++ as the language to compile to SPIR-V.

Game engines and other large codebases with graphics logic are commonly written in C++, and only having to learn and write a single language is great.

Right now, shaders -- if not working with an off-the-shelf graphics abstraction -- are kind of annoying to work with. Cross-compiling to GLSL, HLSL and Metal Shading Language is cumbersome. Almost all game engines create their own shading language and code generate / compile that to the respective shading languages for specific platforms.

This situation could be improved if GPUs were more standardized and didn't have proprietary instruction sets, similar to how CPUs mainly have x86_64 and ARM64 as the dominant instruction sets.

monkeyelite•56m ago
The problem with C++ isn't that core features are broken. It's that it has so many features and modes, and a sprawling standard library as a result.

The alleged goal here is to match syntax of other parts of the program, and those tend to be written in C++.

Calavar•2h ago
I've seen a few projects along the lines of shader programming in C++, shader programming in Rust, etc., but I'm not sure that I understand the point. There's a huge impedance mismatch between CPU and GPU, and if you port CPU-centric code to GPU naively, it's easy to get code that's slower than the CPU version thanks to the leaky abstraction. And I'm not sure you can argue the Pareto principle: if you had a scenario where 80% of the code is not performance sensitive, why would you port it to the GPU in the first place?

Anyway, there's a good chance that I'm missing something here because there seems to be a lot of interest in writing shaders in CPU centric languages.

jcelerier•2h ago
It's very common to write C++ in a way that will work well for GPUs. Consider that CUDA, the most used GPU language, is just a set of extensions on top of C++. Likewise for Metal shaders, or high-level dogs synthesis systems like Vitis.
ranger_danger•1h ago
high-level.. dogs?
chaboud•1h ago
I’m going to guess that they meant Directed Acyclic Graphs or DAGs, which is a useful way to represent data dependencies and transformations, allowing formulation for GPU, CPU, NNA, DSP, FPGA, etc.

If the macrostructure of the operations can be represented appropriately, automatic platform-specific optimization is more approachable.

ruined•57m ago
yes, dogs. very high level, best-of-the-best. the elite. directed ocyclic graphs
Pseudoboss•8m ago
The goodest boys.
arjonagelhout•2h ago
What is the main difference in shading languages vs. programming languages such as C++?

Metal Shading Language for example uses a subset of C++, and HLSL and GLSL are C-like languages.

In my view, it is nice to have an equivalent syntax and language for both CPU and GPU code, even though you still want to write simple code for GPU compute kernels and shaders.

raincole•1h ago
> CPU centric languages.

What does a "GPU centric language" look like?

The most commonly used languages in terms of GPU:

- CUDA: C++ like

- OpenCL: C like

- HLSL/GLSL: C like

arjonagelhout•1h ago
To add to this list, Apple has MSL, which uses a subset of C++
bsder•1h ago
Annoyingly, everything is converging to C++-ish via Slang now that DirectX supports SPIR-V.

OpenCL and GLSL might as well be dead given the vast difference in development resources between them and HLSL/Slang. Slang is effectively HLSL++.

Metal is the main odd man out, but is C++-like.

corysama•1h ago
CUDA is full-on C++20. The trick is learning how to write C++ that works with the hardware instead of against it.
Calavar•25m ago
Java is "C like" and uses garbage collection for dynamic memory management. The major idiom is inheritance and overriding virtual methods.

C++ is "C like" and uses manual memory management. The major idiom is RAII.

GLSL is "C like" and doesn’t even support dynamic memory allocation, manual or otherwise. The major idiom is an implicit fixed function pipeline that executes around your code - you don't write the whole program.

So what does "C like" actually mean? IMHO it refers to superficial syntax elements like curly braces, return type before the function name, prefix and postfix increment and decrement operators, etc. It tells you almost nothing about the semantics, which is the part that determines how code in that language will map to CPU machine code vs. a GPU IR like SPIR-V. For example, CUDA looks like C++ but it has to introduce a new memory model to match the realities of GPU hardware.

tombh•1h ago
One answer is simply that the tooling is better: test frameworks, linters, LSPs, even just including other files and syntax highlighting are better.
mabster•1h ago
I haven't done a lot of shader programming, just modified stuff occasionally.

But one thing I miss in C++ compared to shaders is all the vector swizzling, like v.yxyx. I couldn't really see how they handle vectors, but I might have missed it.

cpgxiii•59m ago
Sometimes, even if you know you're starting with somewhat suboptimal performance, the ability to use CPU code you've already written and tested on the GPU is very valuable.

Many years ago (approx 2011-2012) my own introduction to CUDA came by way of a neat .NET library Cudafy that allowed you to annotate certain methods in your C# code for GPU execution. Obviously the subset of C# that could be supported was quite small, but it was "the same" code you could use elsewhere, so you could test (slowly) the nominal correctness of your code on CPU first. Even now the GPU tooling/debugging is not as good, and back then it was way worse, so being able to debug/test nearly identical code on CPU first was a big help. Of course sometimes the abstraction broke down and you ended up having to look at the generated CUDA source, but that was pretty rare.

itronitron•13m ago
>> There's a huge impedence mismatch between CPU and GPU

That's already been worked out to some extent with libraries such as Aparapi, although you still need to know what you're doing, and to actually need it.

https://aparapi.github.io/

Aparapi allows Java developers to take advantage of the compute power of GPU and APU devices by executing data parallel code fragments on the GPU rather than being confined to the local CPU. It does this by converting Java bytecode to OpenCL at runtime and executing on the GPU, if for any reason Aparapi can't execute on the GPU it will execute in a Java thread pool.

genidoi•1h ago
GLSL is fine. People don't understand that shaders are not just programs but literal works of art[0]. The art comes from the ability to map a canvas's (x,y) -> (r,g,b,a) coordinates in real time to create something mesmerising, and then let anyone remix the code to create something new from the browser.

With IR code, that goes out the window.

[0] examples Matrix 3D shader: https://www.shadertoy.com/view/4t3BWl - Very fast procedural ocean: https://www.shadertoy.com/view/4dSBDt

pixelpoet•1h ago
> GLSL is fine.

How would you use shared/local memory in GLSL? What if you want to implement Kahan summation, is that possible? How's the out-of-core and multi-GPU support in GLSL?

> People don't understand

Careful pointing that finger, 4 fingers might point back... Shadertoy isn't some obscure thing no one has heard of; some of us have been in the demoscene for over 20 years :)

fuhsnn•46m ago
> our renderer currently does not use any subgroup intrinsics. This is partly due to how LLVM does not provide us with the structured control flow we would need to implement Maximal Reconvergence. Augmenting the C language family with such a model and implementing it in a GPU compiler should be a priority in future research.

Sounds like ispc fits the bill: https://ispc.github.io/ispc.html#gang-convergence-guarantees

danybittel•12m ago
> Unfortunately, Shader programs are currently restricted to the Logical model, which disallows all of this.

That is not entirely true: you can use physical pointers with the "buffer device address" feature. (https://docs.vulkan.org/samples/latest/samples/extensions/bu...) It was an extension, but is now part of core Vulkan and is widely available on most GPUs.

This only works in buffers though. Not for images or local arrays.

canyp•5m ago
This is great news for any graphics programmer. The CUDA model needs to be standardized. Programming the GPU by compiling a shader program that exists separately from the rest of the source code is very 1990s.
