An Emoji Reverse Polish Notation Calculator Written in COBOL

https://github.com/ghuntley/cobol-emoji-rpn-calculator
2•ghuntley•3m ago•0 comments

I Shipped a macOS App Built by Claude Code

https://www.indragie.com/blog/i-shipped-a-macos-app-built-entirely-by-claude-code
1•phirschybar•5m ago•0 comments

AI Birthday Letter Blew Me Away: Google is ushering in era of custom chatbots

https://www.theatlantic.com/technology/archive/2025/07/google-drive-personalized-chatbot/683436/
1•labrador•5m ago•0 comments

Ask HN: Advice for Starting a Hacker Space?

5•pkdpic•8m ago•2 comments

Mirage: First AI-Native UGC Game Engine Powered by Real-Time World Model

https://blog.dynamicslab.ai
4•zhitinghu•13m ago•2 comments

Zig language and toolchain packaged as a deb for Debian and Ubuntu amd64/ARM64

https://github.com/clayrisser/debian-zig
2•clayrisser•14m ago•1 comments

'It's too late': David Suzuki says the fight against climate change is lost

https://www.ipolitics.ca/2025/07/02/its-too-late-david-suzuki-says-the-fight-against-climate-change-is-lost/
6•dluan•18m ago•0 comments

What Happened to the Creator of Valve's Forgotten Game – Gunman Chronicles

https://www.pcgamer.com/games/fps/what-happened-to-the-creator-of-gunman-chronicles-valves-forgotten-fps-my-relationship-with-gabe-didnt-really-go-that-great/
3•LarsDu88•18m ago•1 comments

IBM Quantum Success- Charles Tibedo's 127 qubit q-circuit w 70k Gates/20k Depth

https://twitter.com/CTibedo/status/1941606958143811765
2•GeometryKernel•22m ago•0 comments

A new way to conquer deterministic SEC filings

https://edgaranalyzer.com
2•louieteed•26m ago•0 comments

Show HN: D++lang – A new systems programming language with Python-like syntax

https://angel250511.github.io/D-/
2•jarbcopilot•28m ago•1 comments

Serving 200M requests per day with a CGI-bin

https://simonwillison.net/2025/Jul/5/cgi-bin-performance/
3•mustache_kimono•29m ago•0 comments

Soham Parekh breaks silence on defrauding companies, says he was forced to do it

https://timesofindia.indiatimes.com/world/us/im-not-proud-soham-parekh-breaks-silence-on-defrauding-companies-says-he-was-forced-to-do-it/articleshow/122235662.cms
2•romanhn•35m ago•0 comments

Discovery of ancient Roman shoes leaves a big impression

https://www.vindolanda.com/news/magna-shoes
2•geox•37m ago•0 comments

Xi Jinping's two-week absence sparks speculation of power shift within CCP

https://www.cnbctv18.com/world/chinese-president-xi-jinpings-two-week-absence-sparks-speculation-of-power-shift-within-ccp-report-19629056.htm
4•ivape•45m ago•3 comments

Only two islands in the world have population of more than 100M people

https://twitter.com/koridentetsu/status/1692831722159890752
2•matsuu•47m ago•0 comments

Britain is already a hot country. It should act like it

https://www.economist.com/britain/2025/07/03/britain-is-already-a-hot-country-it-should-act-like-it
4•_dain_•55m ago•7 comments

Science has changed, have you? Change is good

https://mnky9800n.substack.com/p/science-has-changed-have-you
2•Bluestein•1h ago•0 comments

Why Polyworking Is The Future Of Work And How To Become A Polyworker

https://www.forbes.com/sites/williamarruda/2024/11/05/why-polyworking-is-the-future-of-work-and-how-to-become-a-polyworker/
4•Anon84•1h ago•1 comments

Vine-like Systems and Malleability

https://nothingisnttrivial.com/vines.html
2•networked•1h ago•0 comments

Show HN: I Asked ChatGPT to Rebuild My Canvas Radial Menu in SVG

https://github.com/victorqribeiro/radialMenuSVG
2•atum47•1h ago•0 comments

Show HN: Quotatious – A Wordle and hangman inspired game

https://www.quotatious.com/
3•jcusch•1h ago•0 comments

Microsoft Music Producer

https://www.youtube.com/watch?v=EdL6b8ZZRLc
3•natebc•1h ago•0 comments

School Discipline Makes a Comeback

https://www.wsj.com/opinion/school-discipline-states-texas-arkansas-washington-covid-trump-obama-eeceba4c
4•sandwichsphinx•1h ago•0 comments

Building Multi-Agent Systems (Part 2)

https://blog.sshh.io/p/building-multi-agent-systems-part
2•sshh12•1h ago•0 comments

Solving Wordle with uv's dependency resolver

https://mildbyte.xyz/blog/solving-wordle-with-uv-dependency-resolver/
3•mildbyte•1h ago•0 comments

How the Biosphere 2 experiment changed our understanding of the Earth

https://www.bbc.com/future/article/20250703-how-the-biosphere-2-experiment-changed-our-understanding-of-the-earth
4•breve•1h ago•0 comments

Think slow, think fast (2016)

http://datagenetics.com/blog/december32016/index.html
2•josephcsible•1h ago•0 comments

Wikipedia Sandbox

https://en.wikipedia.org/wiki/Wikipedia:Sandbox
3•esadek•1h ago•0 comments

The Dalai Lama says he hopes to live more than 130 years

https://apnews.com/article/india-dalai-lama-buddhism-birthday-130-98ea3c4c4db8454ea56a7a123ac425ec
2•geox•1h ago•1 comments

CubeCL: GPU Kernels in Rust for CUDA, ROCm, and WGPU

https://github.com/tracel-ai/cubecl
210•ashvardanian•2mo ago

Comments

zekrioca•2mo ago
Very interesting project! I am wondering how it compares against OpenCL, which I think adopts the same fundamental idea (write once, run everywhere). Is it about CubeCL's internal optimization for Rust that happens at compile time?
nathanielsimard•2mo ago
A lot of things happen at compile time; you can execute arbitrary code in your kernel at compile time, similar to generics but with more flexibility. It's very natural to branch on a comptime config to select an algorithm.
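A rough analogy in plain Rust (this is not CubeCL's actual API): a const-generic flag plays the role of a comptime config, so the branch is resolved during monomorphization and the unused algorithm is compiled out.

```rust
// A const-generic flag stands in for a comptime config value: the `if`
// below is decided at compile time, per monomorphized instance.
fn reduce<const PAIRWISE: bool>(data: &[f32]) -> f32 {
    if PAIRWISE {
        // Pairwise summation: recursion halves the slice, giving better
        // floating-point error growth than a sequential loop.
        match data.len() {
            0 => 0.0,
            1 => data[0],
            n => reduce::<PAIRWISE>(&data[..n / 2]) + reduce::<PAIRWISE>(&data[n / 2..]),
        }
    } else {
        // Naive sequential summation.
        data.iter().sum()
    }
}

fn main() {
    let data = [1.0f32, 2.0, 3.0, 4.0];
    println!("pairwise:   {}", reduce::<true>(&data));
    println!("sequential: {}", reduce::<false>(&data));
}
```

In a CubeCL kernel the same idea applies, except the comptime value can drive arbitrary code at kernel-compilation time rather than being limited to what const generics can express.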
fc417fc802•2mo ago
This appears to be single source which would make it similar to SYCL.

Given that it can target WGPU I'm really wondering why OpenCL isn't included as a backend. One of my biggest complaints about GPGPU stuff is that so many of the solutions are GPU only, and often only target the vendor compute APIs (CUDA, ROCm) which have much narrower ecosystem support (versus an older core vulkan profile for example).

It's desirable to be able to target CPU for compatibility, debugging, and also because it can be nice to have a single solution for parallelizing all your data heavy work. The latter reduces mental overhead and permits more code reuse.

zekrioca•2mo ago
Makes sense. And indeed, having OpenCL as a backend would be a very interesting extension.
ttoinou•2mo ago
Who would use the OpenCL backend rather than the other targets provided?
wingertge•2mo ago
There's infrastructure in the SPIR-V compiler to be able to target both OpenCL and Vulkan, but we don't currently use it because OpenCL would require a new runtime, while Vulkan can simply use the existing wgpu runtime and pass raw SPIR-V shaders.

One thing I've never investigated is how performant OpenCL actually is on CPU. Do you happen to have any resources comparing it to a more native CPU implementation?

fc417fc802•2mo ago
Sorry, my interest there is debugging, and I'm not immediately coming across good benchmarks. PoCL [0] seems to have added a TBB backend [1], so I'd expect it to be reasonable (otherwise why bother), but I haven't tested it.

It isn't really related to your question but I think the FluidX3D benchmarks [2] illustrate that OpenCL is at least viable across a wide variety of hardware.

As far as targeting CPUs in a release build it's not a particular backend that's important to me. The issue is at the source code level. Having single source is nice but you're still stuck with these two very different approaches. It means that the code is still clearly segmented and thus retargeting any given task (at least nontrivial ones) involves rewriting it to at least some extent.

Contrast that with a model like OpenMP where the difference between CPU and GPU is marking the relevant segment for offload. Granted that you'll often need to change algorithms when switching to achieve reasonable performance but it's still a really nice quality of life feature not to have to juggle more paradigms and libraries.

[0] https://github.com/pocl/pocl

[1] https://portablecl.org/docs/html/drivers.html

[2] https://github.com/ProjectPhysX/FluidX3D

LegNeato•2mo ago
See also this overview for how it compares to other projects in the Rust and GPU ecosystem: https://rust-gpu.github.io/ecosystem/
qskousen•2mo ago
Surprised this doesn't mention candle: https://github.com/huggingface/candle
the__alchemist•2mo ago
I don't think that fits; that's a ML framework. The others in the link are general GPU frameworks.
the__alchemist•2mo ago
Love it. I've been using cudarc lately; would love to try this since it looks like it can share data structures between host and device (?). I infer that this is a higher-level abstraction.
adastra22•2mo ago
Where is the Metal love…
syl20bnr•2mo ago
It also compiles directly to MSL; it's just missing from the post title.
adastra22•2mo ago
No it compiles indirectly through wgpu, which means it doesn’t have access to any Metal extensions not exposed by the wgpu interface.
syl20bnr•2mo ago
I am the author of the MSL dialect for the CubeCL CPP compiler. Since the 0.5 release it compiles directly to MSL and supports simdgroup matrix functions, for instance. It does use wgpu for the runtime, but without naga, as we added MSL passthrough to wgpu just for this.
adastra22•2mo ago
You should update the README.
syl20bnr•2mo ago
You are right. We just released Burn and updated its readme; we were not expecting CubeCL to be the one featured. ^_^
grovesNL•2mo ago
wgpu has some options to access backend-specific types and shader passthrough (i.e., you provide your own shader for a backend directly).

Generally wgpu is open to supporting any Metal extensions you need. There's usually an analogous extension in one of the other backends (e.g., Vulkan, DX12) anyway.

moffkalast•2mo ago
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine.
Almondsetat•2mo ago
Why would anyone love something born out of pure spite for industry standards?
pjmlp•2mo ago
For the same reason CUDA and ROCm are supported.
miohtama•2mo ago
Apple is known to be not that great a contributor to open source, unlike Nvidia, AMD, and Intel.
pjmlp•2mo ago
You should check Linus's opinion on those.

Also, whom do you have to thank that LLVM exists in the first place and has not fizzled out as yet another university compiler research project?

m-schuetz•2mo ago
To be fair, the industry standards all suck except for CUDA.
gitroom•2mo ago
Gotta say, the constant dance between all these GPU frameworks kinda wears me out sometimes - always chasing that better build, you know?
nathanielsimard•2mo ago
The need to build CubeCL came from the Burn deep learning framework (https://github.com/tracel-ai/burn), where we want to easily build algorithms like in CUDA with a real programming language, while also being able to integrate those algorithms inside a compiler at runtime to fuse dynamic graphs.

Since we don't want to rewrite everything multiple times, it also has to be multi-platform and optimal, so the feature set must be per-device, not per-language. I'm not aware of a tool that does that, especially in Rust (which Burn is written in).

fc417fc802•2mo ago
> I'm not aware of a tool that does that

Jax? But then you're stuck in python. SYCL?

But yeah not for Rust. This project is filling a prominent hole IMO.

rowanG077•2mo ago
Futhark immediately came to mind. It's designed to be able to be trivially integrated into a package.
kookamamie•2mo ago
This reminds me of Halide (https://halide-lang.org/).

In Halide, the concept was great, yet the problems in kernel development were moved to the side of "scheduling", i.e. determining tiling/vectorization/parallellization for the kernel runs.

rfoo•2mo ago
I'd recommend having a "gemm with a twist" [0] example in the README.md instead of having an element-wise example. It's pretty hard to evaluate how helpful this is for AI otherwise.

[0] For example, gemm but the lhs is in fp8 e4m3 and rhs is in bf16 and we want fp32 accumulation, output to bf16 after applying GELU.
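The shape of that "gemm with a twist" can be sketched as a scalar reference in plain Rust. Since fp8 e4m3 and bf16 are not std Rust types, this illustrative version uses f32 throughout and only mirrors the structure (fp32 accumulation, GELU epilogue), not the mixed-precision types:

```rust
// tanh approximation of GELU, applied as the epilogue.
fn gelu(x: f32) -> f32 {
    0.5 * x * (1.0 + (0.797_884_56 * (x + 0.044_715 * x * x * x)).tanh())
}

/// Row-major matrix multiply with a GELU epilogue: `a` is m*k, `b` is k*n.
/// On a GPU, `a` would be fp8, `b` bf16, and the result cast down to bf16.
fn gemm_gelu(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut c = vec![0.0f32; m * n];
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0f32; // fp32 accumulation
            for p in 0..k {
                acc += a[i * k + p] * b[p * n + j];
            }
            c[i * n + j] = gelu(acc); // epilogue before the downcast
        }
    }
    c
}

fn main() {
    let a = [1.0f32, 2.0, 3.0, 4.0]; // 2x2
    let b = [1.0f32, 0.0, 0.0, 1.0]; // 2x2 identity
    println!("{:?}", gemm_gelu(&a, &b, 2, 2, 2));
}
```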

ashvardanian•2mo ago
Agreed! I was looking through the summation example <https://github.com/tracel-ai/cubecl/blob/main/examples/sum_t...> and it seems like the primary focus is on the more traditional pre-2018 GPU programming without explicit warp-level operations, asynchrony, atomics, barriers, or countless tensor-core operations.

The project feels very nice and it would be great to have more notes in the README on the excluded functionality to better scope its applicability in more advanced GPGPU scenarios.

0x7cfe•2mo ago
CubeCL is the computation backend for Burn (https://burn.dev/) - ML framework done by the same team which does all the tensor magic like autodiff, op fusion and dynamic graphs.
nathanielsimard•2mo ago
We support warp operations, barriers for CUDA, atomics for most backends, and tensor core instructions as well. It's just not well documented in the readme!
ashvardanian•2mo ago
Amazing! Would love to try them! If possible, would also ask for a table translating between CubeCL and CUDA terminology. It seems like CUDA Warps are called Planes in CubeCL, and it’s probably not the only difference.
nathanielsimard•2mo ago
One of the main authors here: the readme isn't really up to date. We have our own gemm implementation based on CubeCL. It's still moving a lot, but we support tensor cores, use warp operations (Plane Operations in CubeCL), and we've even added TMA instructions for CUDA.
wingertge•2mo ago
We don't yet support newer types like fp8 and fp4, that's actually my next project. I'm the only contributor with the hardware to actually use the new types, so it's a bit bottlenecked on a single person right now. But yes, the example is rather simplistic, should probably work on that some time once I'm done updating the feature set to Blackwell.
lostmsu•2mo ago
Isn't there a CPU-based "emulator" in Nvidia dev tools?
wingertge•2mo ago
From what I can tell it's not accurate enough to catch a lot of errors in the real world. Maybe an illegal instruction, but not a race condition from a missing sync or a warp divergence on a uniform instruction or other potential issues like that.
bionhoward•2mo ago
Praying to the kernel gods for some Rust FP8 training
DarkmSparks•2mo ago
Wow, what are the downsides to this? It feels like it could be one of the biggest leaps in programming in a long time. Does it keep Rust's safety aspects? How does it compare with, say, OpenCL?
nathanielsimard•2mo ago
We have safe and unsafe versions for launching kernels; the safe one ensures that a kernel won't corrupt data elsewhere (and therefore won't create memory errors or segfaults). But within a kernel, resources are mutable and shared between GPU cores, since that's how GPUs work.
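A hypothetical sketch of that safe/unsafe launch split (names invented for illustration; this is not CubeCL's real API): the unsafe entry point trusts the caller about buffer sizes, while the safe wrapper validates the invariant first, so a kernel can't be bound to an undersized buffer.

```rust
struct Buffer {
    len: usize,
}

/// # Safety
/// Caller must guarantee `buf.len >= required`.
unsafe fn launch_unchecked(buf: &Buffer, required: usize) {
    // A real runtime would enqueue the kernel here.
    let _ = (buf, required);
}

/// Safe wrapper: checks the invariant, then defers to the unsafe path.
fn launch(buf: &Buffer, required: usize) -> Result<(), String> {
    if buf.len < required {
        return Err(format!("buffer too small: {} < {}", buf.len, required));
    }
    unsafe { launch_unchecked(buf, required) };
    Ok(())
}

fn main() {
    let buf = Buffer { len: 1024 };
    assert!(launch(&buf, 512).is_ok());   // enough room: launches
    assert!(launch(&buf, 4096).is_err()); // too small: rejected up front
    println!("ok");
}
```

The host-side check prevents out-of-bounds bindings, but, as noted above, it cannot make the data races between GPU cores inside the kernel go away.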