GNU Octave has long been the main open-source way to run MATLAB code, but it supports only a subset of the grammar and semantics, and its performance is far behind modern expectations. It’s more of a compatibility bridge than a true runtime alternative.
I decided to implement a MATLAB language compiler and executor from scratch, grammar- and semantics-complete, in Rust, with a modern architecture inspired by V8. Like V8, RunMat starts in a lightweight interpreter, then profiles and JITs hot paths using Cranelift. Snapshotting makes cold start essentially vanish (5ms vs 2–10s in MATLAB), and tensor operations run natively across CPUs and GPUs (CUDA, Metal, Vulkan) without an extra license.
Performance (vs Octave, Apple M2 Max, 32GB):

* Startup: 0.9147s → 0.005s (180× faster)
* Matrix ops: 0.8220s → 0.005s (164×)
* Math funcs: 0.8677s → 0.0053s (160×)
* Control flow: 0.8757s → 0.0057s (154×)
Unlike Octave, RunMat implements the full MATLAB grammar: arrays/indexing (end/colon/masks/N-D), multiple returns, varargin/varargout, classdef OOP, events/handles, metaclass, and standardized exceptions. The core is slim: 5MB static binaries for Linux, macOS, and Windows (or embedded devices and containers), with language extensibility coming from packages that can be written in MATLAB or in Rust.
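For concreteness, here are a few of the constructs from that list written as plain MATLAB (standard language syntax, nothing RunMat-specific):

```matlab
% end/colon/mask indexing
A = magic(4);
lastCol = A(:, end);          % colon plus end indexing
evens   = A(mod(A, 2) == 0);  % logical-mask indexing

% multiple return values plus varargin
[lo, hi] = minmax(3, 1, 4, 1, 5);

function [mn, mx] = minmax(varargin)  % local function at end of script
    v  = [varargin{:}];               % flatten the varargin cell array
    mn = min(v);
    mx = max(v);
end
```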
It’s 100% open source: the repo is about 3M characters of code, has over 1,000 tests covering the edges of the semantic surface, and was bootstrapped in three weeks. A package manager is next, along with a final draft of the built-in standard library.
TLDR: Same language semantics, but rebuilt with modern methods, the way Chrome’s V8 redefined JavaScript engines.
godbolev•1h ago
If my company makes hardware, what's something I can do with RunMat which I couldn't easily do with Octave? (Assuming I don't want to use MATLAB)
nallana•52m ago
- Octave leaves a lot of language semantics unsupported (Classes, for example), and it’s a pure line-by-line interpreter, which makes it very slow.
- Given the design / Cranelift IR translation here, RunMat can run natively on any compile target Cranelift supports (currently x86-64, aarch64/ARM64, s390x/IBM Z, and riscv64), and targeting additional ISAs is easy. The net of this: you can write controls/logic in MATLAB code and run it natively on-device.
- GPU acceleration: the foundations are in place for RunMat to natively accelerate tensor/matrix math on GPUs, irrespective of device (e.g. CUDA / Metal / Vulkan). The net of this is that you can run much bigger math workloads even faster, without having to worry about moving data on/off GPU memory; a configurable (and substitutable) built-in planner does scatter/gather intelligently for you. The AST is extensively typed, so we can support things like reverse-mode autograd by default, meaning your math runs faster than it would natively in Octave or even MATLAB (I believe MATLAB offers some of this via a separate toolbox rather than in its core matrix math, and upsells for it).
- Once the package manager is complete, you can extend it. Octave doesn’t really have a package system per se.
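To make the Classes point above concrete, here's a minimal classdef that exercises handle semantics and events (standard MATLAB syntax, not a RunMat-specific API):

```matlab
classdef Counter < handle        % handle class: reference semantics
    properties
        Count (1,1) double = 0
    end
    events
        Incremented              % listeners can subscribe to this event
    end
    methods
        function bump(obj)
            obj.Count = obj.Count + 1;
            notify(obj, 'Incremented');
        end
    end
end
```

Usage would look like `c = Counter; addlistener(c, 'Incremented', @(src, ~) disp(src.Count)); c.bump();` — the kind of code Octave users typically can't rely on.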
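On the GPU point above, "no manual memory movement" means ordinary matrix code needs no device annotations; a chain like the following is the sort of workload a planner can keep resident on the GPU between ops (plain MATLAB, nothing RunMat-specific assumed):

```matlab
A = rand(4096);
B = rand(4096);
C = (A * B) + A';    % matmul, transpose, add: intermediates stay on-device
s = sum(C, 'all');   % reduction brings a single scalar back to the host
```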