
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
624•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
926•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•24 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•27 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
219•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
210•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
322•vecti•15h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
370•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
358•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
477•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•160 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
14•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
3•theblazehen•2d ago•0 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•188 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•17h ago•63 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
280•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
132•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

No More Shading Languages: Compiling C++ to Vulkan Shaders [pdf]

https://xol.io/random/vcc-paper.pdf
49•pjmlp•7mo ago

Comments

rgbforge•7mo ago
The section discussing Slang is interesting; I didn't know that function pointers were only available for CUDA targets.
raincole•7mo ago
Yeah, C++ is the peak language design that everyone loves...
reactordev•7mo ago
In game dev they definitely do.
arjonagelhout•7mo ago
I think this is indeed the advantage of this paper's choice of C++ as the language to compile to SPIR-V.

Game engines and other large codebases with graphics logic are commonly written in C++, and only having to learn and write a single language is great.

Right now, shaders -- if not working with an off-the-shelf graphics abstraction -- are kind of annoying to work with. Cross-compiling to GLSL, HLSL and Metal Shading Language is cumbersome. Almost all game engines create their own shading language and code generate / compile that to the respective shading languages for specific platforms.

This situation could be improved if GPUs were more standardized and didn't have proprietary instruction sets. Similar to how CPUs mainly have x86_64 and ARM64 as the dominant instruction sets.

monkeyelite•7mo ago
The problem with C++ isn't that core features are broken. It's that it has so many features and modes, and a sprawling standard library because of them.

The alleged goal here is to match the syntax of other parts of the program, and those tend to be written in C++.

feelamee•7mo ago
> that core features are broken

can you please explain or link some sources about this?

btw, is the C++ standard library really bloated? There are a lot of languages that cram much more stuff into their standard libraries, e.g. Python. A lot of people complain about the lack of many library features -- networking, reflection; <expected> and <optional> were added too late, and so on.

monkeyelite•7mo ago
Your quote is missing a key word.
pjmlp•7mo ago
While it has its issues, and it seems WG21 has lost direction on where to drive C++, in the games, graphics and VFX industries another language will have a very hard time imposing itself.

Java and C# only did so thanks to tooling: the unavoidable presence on Android (previously J2ME) and the market success of Minecraft, XNA and Unity.

Anything else that wants to take on C and C++ in those industries has to come up with similarly unavoidable tooling.

Calavar•7mo ago
I've seen a few projects along the lines of shader programming in C++, shader programming in Rust, etc., but I'm not sure that I understand the point. There's a huge impedance mismatch between CPU and GPU, and if you port CPU-centric code to GPU naively, it's easy to get code that is slower than the CPU version thanks to the leaky abstraction. And I'm not sure you can argue the Pareto principle: if you had a scenario where 80% of the code is not performance sensitive, why would you port it to GPU in the first place?

Anyway, there's a good chance that I'm missing something here, because there seems to be a lot of interest in writing shaders in CPU-centric languages.

jcelerier•7mo ago
It's very common to write C++ in a way that will work well for GPUs. Consider that CUDA, the most used GPU language, is just a set of extensions on top of C++. Likewise for Metal shaders, or high-level dogs synthesis systems like Vitis
ranger_danger•7mo ago
high-level.. dogs?
chaboud•7mo ago
I’m going to guess that they meant Directed Acyclic Graphs or DAGs, which is a useful way to represent data dependencies and transformations, allowing formulation for GPU, CPU, NNA, DSP, FPGA, etc.

If the macrostructure of the operations can be represented appropriately, automatic platform-specific optimization is more approachable.
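
For concreteness, a minimal C++ sketch of that representation (all names here are illustrative, not from any of the systems mentioned): each node lists its producers explicitly, so the data dependencies form a DAG that a backend is free to schedule for CPU, GPU, or anything else.

    #include <cstdio>
    #include <functional>
    #include <vector>

    // Hypothetical op-graph: each node names its producer nodes explicitly,
    // so dependencies form a DAG a backend can schedule however it likes.
    struct Node {
        std::vector<int> inputs;  // indices of producer nodes
        std::function<float(const std::vector<float>&)> op;
    };

    // Assumes nodes are already in topological order (edges point backwards).
    float evaluate(const std::vector<Node>& graph) {
        std::vector<float> results(graph.size());
        for (size_t i = 0; i < graph.size(); ++i) {
            std::vector<float> args;
            for (int in : graph[i].inputs) args.push_back(results[in]);
            results[i] = graph[i].op(args);
        }
        return results.back();  // last node is the graph's output
    }

    int main() {
        std::vector<Node> g = {
            {{}, [](const std::vector<float>&) { return 3.0f; }},               // constant
            {{}, [](const std::vector<float>&) { return 4.0f; }},               // constant
            {{0, 1}, [](const std::vector<float>& a) { return a[0] * a[1]; }},  // multiply
        };
        std::printf("%f\n", evaluate(g));  // 12.0
    }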

ruined•7mo ago
yes, dogs. very high level, best-of-the-best. the elite. directed ocyclic graphs
Pseudoboss•7mo ago
The goodest boys.
canyp•7mo ago
I'm pretty sure he meant dawgs. Directed acyclic woof graphs.
jcelerier•7mo ago
wops! FPGAs*
pjmlp•7mo ago
People keep repeating this wrongly.

CUDA is a polyglot development stack for compute, with first-party support for C, C++, Fortran, a Python JIT DSL, and anything that targets PTX, with hardware semantics that nowadays follow the C++ memory model, although it wasn't originally designed that way.

As NVidia-blessed extensions for compiler backends targeting PTX, there is also Haskell, .NET, Java and Julia tooling.

For whatever reason, all of that keeps being forgotten and only C or C++ gets a mention, which is the same mistake Intel and AMD keep making with their CUDA porting kits.

arjonagelhout•7mo ago
What is the main difference between shading languages and programming languages such as C++?

Metal Shading Language for example uses a subset of C++, and HLSL and GLSL are C-like languages.

In my view, it is nice to have an equivalent syntax and language for both CPU and GPU code, even though you still want to write simple code for GPU compute kernels and shaders.

hgs3•7mo ago
I would expect a shading language to provide specialized features for working with GPU resources and intrinsic operations.
pjmlp•7mo ago
The language extensions for GPU semantics and code distribution that are required in C and C++.

The difference is that shader languages have one specific set of semantics, while C and C++ still have to worry about ISO standard semantics, coupled with those extensions, and with broken expectations when the code runs under execution semantics other than what a regular C or C++ developer would expect.

raincole•7mo ago
> CPU centric languages.

What does a "GPU centric language" look like?

The most commonly used GPU languages:

- CUDA: C++ like

- OpenCL: C like

- HLSL/GLSL: C like

arjonagelhout•7mo ago
To add to this list, Apple has MSL, which uses a subset of C++
bsder•7mo ago
Annoyingly, everything is converging to C++-ish via Slang now that DirectX supports SPIR-V.

OpenCL and GLSL might as well be dead given the vast difference in development resources between them and HLSL/Slang. Slang is effectively HLSL++.

Metal is the main odd man out, but is C++-like.

pjmlp•7mo ago
Slang is inspired by C# beyond the HLSL common subset, whereas HLSL is moving more towards C++ features: the module system, generics and operator definitions.

corysama•7mo ago
CUDA is full-on C++20. The trick is learning how to write C++ that works with the hardware instead of against it.
pjmlp•7mo ago
Minus modules though.
Calavar•7mo ago
C++ is "C like" and uses manual memory management. The major idiom is RAII, which is based on deterministic destructor execution.

Java is "C like" and uses garbage collection for dynamic memory management. It doesn't have determistic destructors. The major idiom is inheritance and overriding virtual methods.

GLSL is "C like" and doesn’t even support dynamic memory allocation, manual or otherwise. The major idiom is an implicit fixed function pipeline that executes around your code - you don't write the whole program.

So what does "C like" actually mean? IMHO it refers to superficial syntax elements like curly braces, return type before the function name, prefix and postfix increment operators, etc. It tells you almost nothing about the semantics, which is the part that determines how code in that language will map to CPU machine code vs. a GPU IR like SPIR-V. For example, CUDA is based on C++ but it has to introduce a new memory model to match the realities of GPU silicon.

tombh•7mo ago
One answer is simply that the tooling is better: test frameworks, linters, LSPs, even just including other files and syntax highlighting.
mabster•7mo ago
I haven't done a lot of shader programming, just modified stuff occasionally.

But one thing I miss in C++ compared to shaders is all the vector swizzling, like v.yxyx. I couldn't really see how they handle vectors, but I might have missed it.

pjmlp•7mo ago
There are libraries for that, like GLM.
mabster•7mo ago
Unfortunately GLM doesn't use SIMD instructions.

I really wanted something that's compatible with shaders and fast, so we could quickly swap between CPU and GPU, because porting the code was time consuming.

I've been down this road before. If you aren't doing SIMD it's pretty easy to implement, but it relies on UB that happens to work on all the compilers I tried (C++ properties would make this better and portable). I got something working with SIMD that unfortunately doesn't compile correctly on Clang!
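
For readers wondering what the UB in question looks like, a stripped-down sketch of the usual union/proxy swizzle trick (names illustrative): reading p.yxyx after writing through x/y/z/w reads inactive union members, which is formally UB in ISO C++ (and the anonymous struct is itself a compiler extension), yet GCC, Clang and MSVC all tolerate it in practice.

    // Stripped-down union/proxy swizzle, GLM-style. Formally UB in ISO C++
    // (inactive union member reads; anonymous struct is an extension), but
    // accepted in practice by the major compilers.
    struct vec4;

    template <int A, int B, int C, int D>
    struct swizzle4 {
        float v[4];
        operator vec4() const;  // materialize the permuted vector
    };

    struct vec4 {
        union {
            float v[4];
            struct { float x, y, z, w; };
            swizzle4<1, 0, 1, 0> yxyx;  // one member per permutation
            swizzle4<2, 1, 0, 3> zyxw;
        };
        vec4(float a, float b, float c, float d) : v{a, b, c, d} {}
    };

    template <int A, int B, int C, int D>
    swizzle4<A, B, C, D>::operator vec4() const {
        return vec4(v[A], v[B], v[C], v[D]);
    }

    int main() {
        vec4 p(1, 2, 3, 4);
        vec4 q = p.yxyx;               // (2, 1, 2, 1)
        return static_cast<int>(q.x);  // 2
    }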

Cieric•7mo ago
But GLM does have support for SIMD [1] -- or do you mean that it doesn't support specific SIMD instructions?

[1] https://glm.g-truc.net/0.9.1/api/a00285.html

mabster•7mo ago
Totally missed that, thanks! I don't know the library very well, but as far as I can tell they don't support .xyxy-style swizzling with the SIMD Vec4 type.

They're using the same "proxy object" method I was using for their swizzling, which I'm pretty sure won't work with SIMD types, but I would love to be proven wrong!

I haven't deep dived into the library as I'm no longer doing this kind of code.

cpgxiii•7mo ago
Sometimes, even if you know you're starting with somewhat suboptimal performance, the ability to reuse on the GPU the CPU code you've already written and tested is very valuable.

Many years ago (approx 2011-2012) my own introduction to CUDA came by way of a neat .NET library Cudafy that allowed you to annotate certain methods in your C# code for GPU execution. Obviously the subset of C# that could be supported was quite small, but it was "the same" code you could use elsewhere, so you could test (slowly) the nominal correctness of your code on CPU first. Even now the GPU tooling/debugging is not as good, and back then it was way worse, so being able to debug/test nearly identical code on CPU first was a big help. Of course sometimes the abstraction broke down and you ended up having to look at the generated CUDA source, but that was pretty rare.
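
Cudafy is long gone, but the same test-on-CPU-first workflow exists in plain CUDA C++, where one function can be compiled for both sides; a minimal sketch, not tied to any particular project:

    #include <cstdio>

    // Compiled for both host and device: unit-test it on the CPU,
    // then call it from a kernel unchanged.
    __host__ __device__ float saxpy_elem(float a, float x, float y) {
        return a * x + y;
    }

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = saxpy_elem(a, x[i], y[i]);
    }

    int main() {
        // CPU-side sanity check of the shared function, no GPU needed.
        std::printf("%f\n", saxpy_elem(2.0f, 3.0f, 1.0f));  // 7.0
    }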

CreepGin•7mo ago
This was many years ago, after Unity released Mathematics and Burst. I was porting (part of) my CPU toy pathtracer to a compute shader. At one point, I literally just copy-pasted chunks of my CPU code straight into an HLSL file, fully expecting it to throw some syntax errors or need tweaks. But nope. It ran perfectly, no changes needed. It felt kinda magical and made me realize I could actually debug stuff on the CPU first, then move it over to the GPU with almost zero hassle.

For folks who don't know: Unity.Mathematics is a package that ships a low-level math library whose types (`float2`, `float3`, `float4`, `int4x4`, etc.) are a 1-to-1 mirror of HLSL's built-in vector and matrix types. Because the syntax, swizzling, and operators are identical, any pure-math function you write in C# compiles under Burst to SIMD-friendly machine code on the CPU and can be dropped into a `.hlsl` file with almost zero edits for the GPU.

itronitron•7mo ago
>> There's a huge impedence mismatch between CPU and GPU

That's already been worked out to some extent with libraries such as Aparapi, although you still need to know what you're doing, and to actually need the GPU in the first place.

https://aparapi.github.io/

Aparapi allows Java developers to take advantage of the compute power of GPU and APU devices by executing data parallel code fragments on the GPU rather than being confined to the local CPU. It does this by converting Java bytecode to OpenCL at runtime and executing on the GPU, if for any reason Aparapi can't execute on the GPU it will execute in a Java thread pool.

ethan_smith•7mo ago
The value isn't in porting CPU-centric code, but in shared abstractions, tooling, and language familiarity that reduce context switching costs when developing across the CPU/GPU boundary.
hmcq6•7mo ago
From my perspective I just want better DevEx.

C++ DevEx is significantly better than ISF despite the two looking very similar, and it seems like less of a hurdle to get C++ to spit out an ISF-compatible file than it is to build all the tools for ISF (and GLSL, HLSL, WGSL).

genidoi•7mo ago
GLSL is fine. People don't understand that shaders are not just programs but literal works of art[0]. The art comes from the ability to map a canvas's (x,y) coordinates to (r,g,b,a) in real time to create something mesmerising, and then let anyone remix the code to create something new from the browser.

With compiled SPIR-V, that goes out the window.

[0] examples Matrix 3D shader: https://www.shadertoy.com/view/4t3BWl - Very fast procedural ocean: https://www.shadertoy.com/view/4dSBDt
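
The mapping being described is small enough to state in code; a CPU-side sketch of what a Shadertoy-style per-pixel function boils down to (names hypothetical):

    #include <cmath>
    #include <cstdio>

    struct Color { float r, g, b, a; };

    // A fragment shader, conceptually: a pure function from pixel
    // coordinates (and time) to a color, run once per pixel per frame.
    Color shade(float x, float y, float t) {
        float v = 0.5f + 0.5f * std::sin(10.0f * (x * x + y * y) - t);
        return {v, v * x, v * y, 1.0f};
    }

    int main() {
        Color c = shade(0.25f, 0.5f, 1.0f);  // one sample of the canvas
        std::printf("%.3f %.3f %.3f %.3f\n", c.r, c.g, c.b, c.a);
    }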

pixelpoet•7mo ago
> GLSL is fine.

How would you use shared/local memory in GLSL? What if you want to implement Kahan summation, is that possible? How's the out-of-core and multi-GPU support in GLSL?

> People don't understand

Careful pointing that finger, 4 fingers might point back... Shadertoy isn't some obscure thing no one has heard of; some of us have been in the demoscene for over 20 years :)
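
For readers who haven't met it, the Kahan summation mentioned above is a compensated loop that carries the rounding error forward; a minimal C++ sketch follows. The catch on GPUs is that the compensation only survives if the compiler preserves IEEE ordering, i.e. no fast-math-style reassociation, which shading-language compilers don't generally promise.

    #include <cstdio>
    #include <vector>

    // Kahan (compensated) summation: c tracks the low-order bits that a
    // plain running sum would discard. Breaks if the compiler is allowed
    // to reassociate floating-point math (fast-math and friends).
    float kahan_sum(const std::vector<float>& xs) {
        float sum = 0.0f, c = 0.0f;
        for (float x : xs) {
            float y = x - c;    // apply the stored correction
            float t = sum + y;
            c = (t - sum) - y;  // recover what the add just lost
            sum = t;
        }
        return sum;
    }

    int main() {
        std::vector<float> xs(10000000, 0.1f);
        std::printf("%f\n", kahan_sum(xs));  // far closer to 1e6 than a naive sum
    }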

genidoi•7mo ago
I don't know x3

> some of us are in the demoscene since over 20 years :)

The demoscene is different, though what I'm imagining Shadertoy could be hasn't really been implemented. GLSL shaders are completely obscure outside of dev circles, and that's a bummer.

exDM69•7mo ago
> How would you use shared/local memory in GLSL?

In compute shaders the `shared` keyword is for this.
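
For readers coming from CUDA rather than GLSL, `shared` plays the role of `__shared__`: per-workgroup scratch memory. A minimal reduction sketch in CUDA C++ terms (the GLSL version would use a shared array plus barrier()):

    // Per-block scratch memory, the CUDA analogue of GLSL's `shared`.
    // Classic tree reduction; assumes blockDim.x == 256.
    __global__ void block_sum(const float* in, float* out) {
        __shared__ float tile[256];
        unsigned t = threadIdx.x;
        tile[t] = in[blockIdx.x * blockDim.x + t];
        __syncthreads();                  // like barrier() in GLSL
        for (unsigned s = blockDim.x / 2; s > 0; s /= 2) {
            if (t < s) tile[t] += tile[t + s];
            __syncthreads();
        }
        if (t == 0) out[blockIdx.x] = tile[0];
    }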

pjmlp•7mo ago
GLSL is dead for practical purposes; Khronos acknowledged at Vulkanised 2024 that no one is working on improving it or keeping up with new Vulkan features.

Hence most companies are either using HLSL, even outside the games industry, or adopting the new kid on the block, Slang, which NVidia offered to Khronos as a GLSL replacement.

So GLSL remains for OpenGL and WebGL, and that is about it.

fuhsnn•7mo ago
> our renderer currently does not use any subgroup intrinsics. This is partly due to how LLVM does not provide us with the structured control flow we would need to implement Maximal Reconvergence. Augmenting the C language family with such a model and implementing it in a GPU compiler should be a priority in future research.

Sounds like ispc fits the bill: https://ispc.github.io/ispc.html#gang-convergence-guarantees

danybittel•7mo ago
> Unfortunately, Shader programs are currently restricted to the Logical model, which disallows all of this.

That is not entirely true; you can use physical pointers with the "buffer device address" feature (https://docs.vulkan.org/samples/latest/samples/extensions/bu...). It was an extension, but is now part of core Vulkan and is widely available on most GPUs.

This only works in buffers, though, not for images or local arrays.
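
For reference, fetching such a physical address on the host side is a single call once the feature is enabled and the buffer was created with the device-address usage bit; a minimal sketch:

    #include <vulkan/vulkan.h>

    // Requires the bufferDeviceAddress feature (core in Vulkan 1.2) and a
    // buffer created with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT. The
    // returned 64-bit address can be handed to a shader (via push constants,
    // another buffer, ...) and dereferenced there as a physical pointer.
    VkDeviceAddress buffer_address(VkDevice device, VkBuffer buffer) {
        VkBufferDeviceAddressInfo info{};
        info.sType = VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO;
        info.buffer = buffer;
        return vkGetBufferDeviceAddress(device, &info);
    }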

pjmlp•7mo ago
Not on mobile Android powered ones.
danybittel•7mo ago
It should be; it is part of 1.2 (https://vulkan.gpuinfo.org/listfeaturescore12.php -- the first entry, bufferDeviceAddress, supported by 97.89%).

Or did you mean some specific feature? I haven't used it on mobile.

pjmlp•7mo ago
Supported as in it actually works, or as in it gets listed as something the driver knows about but is full of issues when actually used?

There is a reason there were Vulkanised 2025 talks about improving the state of Vulkan affairs on Android.

canyp•7mo ago
This is great news for any graphics programmer. The CUDA model needs to be standardized. Programming the GPU by compiling a shader program that exists separately from the rest of the source code is very 1990s.
makotech221•7mo ago
The Stride3D engine (https://www.stride3d.net/features/#graphics) has something kind of similar; it allows writing shaders in (nearly) C# and having them compile to GLSL.