
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
19•yi_wang•1h ago•4 comments

Show HN: Craftplan – Elixir-based micro-ERP for small-scale manufacturers

https://puemos.github.io/craftplan/
15•deofoo•4d ago•3 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
62•momciloo•9h ago•12 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
31•mbitsnbites•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
297•isitcontent•1d ago•39 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
364•eljojo•1d ago•218 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
44•sandGorgon•2d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
374•vecti•1d ago•172 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•3h ago•1 comment

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
2•davidcondrey•4h ago•2 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
98•antves•2d ago•70 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
86•phreda4•1d ago•17 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•1h ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
2•latentio•6h ago•0 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
157•bsgeraci•1d ago•65 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
29•dchu17•1d ago•12 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
55•nwparker•2d ago•12 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
2•shubham-coder•8h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
23•NathanFlurry•1d ago•11 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
18•denuoweb•2d ago•2 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•9h ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
3•xeouz•9h ago•1 comment

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
2•ivanglpz•11h ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
3•anipaleja•11h ago•0 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•1d ago•8 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
27•JoshPurtell•2d ago•5 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•13h ago•1 comment

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
9•sakanakana00•14h ago•2 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•14h ago•1 comment

Show HN: RunMat – runtime with auto CPU/GPU routing for dense math

https://github.com/runmat-org/runmat
21•nallana•2mo ago
Hi, I’m Nabeel. In August I released RunMat as an open-source runtime for MATLAB code that was already much faster than GNU Octave on the workloads I tried. https://news.ycombinator.com/item?id=44972919

Since then, I’ve taken it further with RunMat Accelerate: the runtime now automatically fuses operations and routes work between CPU and GPU. You write MATLAB-style code, and RunMat runs your computation across CPUs and GPUs for speed. No CUDA, no kernel code.

Under the hood, it builds a graph of your array math, fuses long chains into a few kernels, keeps data on the GPU when that helps, and falls back to CPU JIT / BLAS for small cases.
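
To make that concrete, here is a minimal MATLAB-style sketch of the kind of array-math chain the fusion targets. It's illustrative only: the sizes and constants are made up, it loosely mirrors the image-preprocessing benchmark below, and it assumes common builtins like rand, mean, std and max are in the supported subset.

    % Toy 4K-frame normalization chain (illustrative, not the repo's benchmark code)
    img   = rand(4096, 4096);            % stand-in for one 4K frame
    mu    = mean(img(:));                % global mean
    sigma = std(img(:));                 % global standard deviation
    z     = (img - mu) / sigma;          % normalize
    out   = 1.2 * z + 0.05;              % gain / bias
    out   = max(out, 0) .^ (1 / 2.2);    % clamp + gamma
    err   = mean((out(:) - img(:)).^2);  % MSE against the input
    disp(err)

Every statement here is either elementwise over the same array or a simple reduction, which is exactly the shape of computation that can be collapsed into a few kernels instead of one pass over memory per line.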

On an Apple M2 Max (32 GB), here are some current benchmarks (median of several runs):

* 5M-path Monte Carlo: RunMat ≈ 0.61 s, PyTorch ≈ 1.70 s, NumPy ≈ 79.9 s → ~2.8× faster than PyTorch and ~130× faster than NumPy on this test.

* 64 × 4K image preprocessing pipeline (mean/std, normalize, gain/bias, gamma, MSE): RunMat ≈ 0.68 s, PyTorch ≈ 1.20 s, NumPy ≈ 7.0 s → ~1.8× faster than PyTorch and ~10× faster than NumPy.

* 1B-point elementwise chain (sin / exp / cos / tanh mix): RunMat ≈ 0.14 s, PyTorch ≈ 20.8 s, NumPy ≈ 11.9 s → ~140× faster than PyTorch and ~80× faster than NumPy.
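
For the last line above, here is a rough MATLAB-style approximation of what a "1B-point elementwise chain" looks like (not the actual benchmark script from the repo; the exact op mix there may differ):

    % Rough sketch of an elementwise chain over 1e9 points (shrink n to try it on a laptop)
    n = 1e9;
    x = rand(n, 1);
    y = sin(x) .* exp(-x) + cos(2 * x) .* tanh(x);   % sin / exp / cos / tanh mix
    disp(sum(y))                                     % reduce so the whole chain materializes

In an eager library, each of those ops is typically a separate pass over roughly 8 GB of doubles; fusing them into far fewer passes is the point of the exercise.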

If you want more detail on how the fusion and CPU/GPU routing work, I wrote up a longer post here: https://runmat.org/blog/runmat-accel-intro-blog

You can run the same benchmarks yourself from the GitHub repo in the main HN link. Feedback, bug reports, and “here’s where it breaks or is slow” examples are very welcome.

Comments

constantcrying•2mo ago
Writing a (somewhat?) MATLAB-compatible interpreter and runtime that targets GPU and CPU simultaneously is certainly impressive.

But who is this for? MATLAB users? Python users? Julia users? Do you have an aim with this project, or is it just for fun?

salvesefu•2mo ago
From the Website: "If you write math in MATLAB and hit performance walls on CPU, RunMat is built for you."

nallana•2mo ago
Thanks!! It was originally for Octave users whose scripts were running painfully slow.

The goal was to keep the MATLAB front end, with its syntax for capturing math intent, but run it fast.

When we dug into why people were still using Octave, it was because it let them focus on their math and was easier for them to read, which was especially important for people who aren't programmers, e.g. scientists and engineers.

I suppose this is also why we write in higher-level languages than assembly.

The goal of this project is now: let’s make the fastest runtime in the world to run math.

It turned out that the MATLAB syntax offers a large amount of compile-time hinting (it is meant for capturing math intent, after all).

We've found, as we built this, that if we take a domain-specific approach (i.e. make every optimization choice in favor of people who want to focus on the math), we can outperform general-purpose languages like Python by a wide margin on the math part.

For example, internals like keeping tensor shapes and broadcasting intent in the AST, and having the computation graph available to detect when GPU offload is profitable, aren't things that make practical sense to build into a general-purpose runtime like Python. But here they pay off:

they let RunMat speed up elementwise math by orders of magnitude (e.g. 1B points going through 5-6 elementwise ops like sin/cos/+/- are ~80x faster on my MBP vs. Python/PyTorch).
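
A made-up snippet to illustrate the shapes-and-broadcasting point: in code like the following, the 1x500 per-column statistics and their implicit expansion over the 1000x500 matrix are visible right in the source, which is the kind of hint a fusing compiler can pick up.

    % Per-column standardization via implicit expansion (illustrative example)
    A      = randn(1000, 500);
    col_mu = mean(A, 1);             % 1 x 500 per-column means
    col_sd = std(A, 0, 1);           % 1 x 500 per-column standard deviations
    Z      = (A - col_mu) ./ col_sd; % broadcasts the column stats over 1000 rows
    disp(mean(Z(:)))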

So, tl;dr: it started out for Octave users. The goal now is to build the fastest runtime for math, for those who are looking to use computers to do math.

Obligatory disclosure because we're engineers: you can still get faster by writing your own CUDA / GPU code. We're betting 99% of the people who are trying to run math on computers don't want to do that (ML community notwithstanding).

ardata•2mo ago
I've built trading bots that run Monte Carlo sims on historical data... NumPy works but gets slow on large backtests, and PyTorch feels like overkill when I just want fast array math without managing GPU memory. If this can drop in and handle the heavy lifting automatically, I could see a use for it.