
NIST ion clock sets new record for most accurate clock

https://www.nist.gov/news-events/news/2025/07/nist-ion-clock-sets-new-record-most-accurate-clock-world
223•voxadam•6h ago•81 comments

Show HN: Shoggoth Mini – A soft tentacle robot powered by GPT-4o and RL

https://www.matthieulc.com/posts/shoggoth-mini
294•cataPhil•7h ago•58 comments

To be a better programmer, write little proofs in your head

https://the-nerve-blog.ghost.io/to-be-a-better-programmer-write-little-proofs-in-your-head/
177•mprast•5h ago•83 comments

Encrypting files with passkeys and age

https://words.filippo.io/passkey-encryption/
24•thadt•1d ago•33 comments

Hierarchical Modeling (H-Nets)

https://cartesia.ai/blog/hierarchical-modeling
45•marviel•3h ago•12 comments

The Story of Mel, A Real Programmer, Annotated (1996)

https://users.cs.utah.edu/~elb/folklore/mel-annotated/node1.html#SECTION00010000000000000000
28•fanf2•3d ago•5 comments

The FIPS 140-3 Go Cryptographic Module

https://go.dev/blog/fips140
25•FiloSottile•2h ago•6 comments

Show HN: Beyond Z²+C, Plot Any Fractal

https://www.juliascope.com/
55•akunzler•4h ago•13 comments

Designing for the Eye: Optical Corrections in Architecture and Typography

https://www.nubero.ch/blog/015/
76•ArmageddonIt•5h ago•12 comments

Helix Editor 25.07

https://helix-editor.com/news/release-25-07-highlights/
221•matrixhelix•3h ago•89 comments

Reflections on OpenAI

https://calv.info/openai-reflections
279•calvinfo•6h ago•157 comments

Underwriting Superintelligence

https://underwriting-superintelligence.com/
32•brdd•3h ago•24 comments

Hazel: A live functional programming environment with typed holes

https://github.com/hazelgrove/hazel
25•azhenley•3h ago•5 comments

Human Stigmergy: The world is my task list

https://aethermug.com/posts/human-stigmergy
31•Petiver•3h ago•10 comments

How Culture Is Made

https://www.metalabel.com/studio/release-strategies/how-culture-is-made
11•surprisetalk•3d ago•2 comments

Lorem Gibson

http://loremgibson.com/
81•DyslexicAtheist•2d ago•13 comments

Petabit-class transmission over > 1000 km using standard 19-core optical fiber

https://www.nict.go.jp/en/press/2025/05/29-1.html
67•the_arun•2d ago•29 comments

CoinTracker (YC W18) is hiring to solve crypto taxes and accounting (remote)

1•chanfest22•5h ago

Voxtral – Frontier open source speech understanding models

https://mistral.ai/news/voxtral
32•meetpateltech•8h ago•11 comments

LLM Inevitabilism

https://tomrenner.com/posts/llm-inevitabilism/
1460•SwoopsFromAbove•18h ago•1376 comments

What caused the 'baby boom'? What would it take to have another?

https://www.derekthompson.org/p/what-caused-the-baby-boom-what-would
42•mmcclure•6h ago•226 comments

Blender 4.5 LTS Released

https://www.blender.org/download/releases/4-5/
251•obdev•7h ago•76 comments

Claude for Financial Services

https://www.anthropic.com/news/claude-for-financial-services
13•mildlyhostileux•48m ago•7 comments

Most(ly Dead) Influential Programming Languages (2020)

https://www.hillelwayne.com/post/influential-dead-languages/
60•azhenley•3d ago•35 comments

Where's Firefox Going Next?

https://connect.mozilla.org/t5/discussions/where-s-firefox-going-next-you-tell-us/m-p/100698#M39094
31•ReadCarlBarks•1h ago•22 comments

Show HN: We made our own inference engine for Apple Silicon

https://github.com/trymirai/uzu
135•darkolorin•11h ago•41 comments

Literalism plaguing today’s movies

https://www.newyorker.com/culture/critics-notebook/the-new-literalism-plaguing-todays-biggest-movies
199•frogulis•18h ago•362 comments

KDE's official Roku/Android TV alternative is back from the dead

https://www.neowin.net/news/kdes-android-tv-alternative-plasma-bigscreen-rises-from-the-dead-with-a-better-ui/
113•bundie•5h ago•30 comments

A quick look at unprivileged sandboxing

https://www.uninformativ.de/blog/postings/2025-07-13/0/POSTING-en.html
37•zdw•2d ago•13 comments

Cloudflare starts blocking pirate sites for UK users

https://torrentfreak.com/cloudflare-starts-blocking-pirate-sites-for-uk-users-thats-a-pretty-big-deal-250715/
191•gloxkiqcza•8h ago•207 comments

Show HN: We made our own inference engine for Apple Silicon

https://github.com/trymirai/uzu
135•darkolorin•11h ago
We wrote our inference engine in Rust; it is faster than llama.cpp in all of our use cases. Your feedback is very welcome. It was written from scratch with the idea that you can add support for any kernel and platform.

Comments

sharifulin•8h ago
Wow! Sounds super interesting
slavasmirnov•8h ago
That's exactly what we are looking for, so we don't waste money on APIs. I wonder how significant the trade-offs are.
TheMagicHorsey•8h ago
Amazing!

How was your experience using Rust on this project? I'm considering a project in an adjacent space and I'm trying to decide between Rust, C, and Zig. Rust seems a bit burdensome with its complexity compared to C and Zig. Reminds me of C++ in its complexity (although not as bad). I find it difficult to walk through and understand a complicated Rust repository. I don't have that problem with C and Zig for the most part.

But I'm wondering if I just need to invest more time in Rust. How was your learning curve with the language?

adastra22•7h ago
You are confusing familiarity with intrinsic complexity. I had 20 years of experience with C/C++ before switching to Rust a few years ago. After the initial hurdle, it is way easier and very simple to follow.
ednevsky•7h ago
nice
ewuhic•7h ago
>faster than llama cpp in all of the use cases

What's your deliberate, well-thought-out roadmap for achieving adoption similar to llama.cpp's?

pants2•7h ago
Probably getting acquired by Apple :)
khurs•24m ago
Ollama is the leader, isn't it?

Brew stats (downloads, last 30 days):

Ollama: 28,232
llama.cpp: 7,826

mintflow•7h ago
Just curious: will it be supported on iOS? It would be great to build a local LLM app with this project.
AlekseiSavin•7h ago
Already :) https://github.com/trymirai/uzu-swift
cwlcwlcwlingg•7h ago
Wondering why they used Rust rather than C++.
adastra22•7h ago
Why use C++?
khurs•21m ago
So C++ users don't need to learn something new.
bee_rider•6h ago
I wonder why they didn’t use Fortran.
giancarlostoro•4h ago
...or D? Or Go? Or Java? C#? Zig? They chose what they were most comfortable with. Rust is fine; it's clearly not for everyone, but those who use it produce high-quality software. I would argue the same for Go, without all the unnecessary mental overhead of C or C++.
outworlder•4h ago
Why use C++ for greenfield projects?
khurs•24m ago
The recommendation from the security agencies is to prefer Rust over C++, as there is less risk of exploits.

I checked: llama.cpp uses C++ (obviously) and Ollama uses Go.

greggh•7h ago
"trymirai": every time I hear the word Mirai I think of the large IoT DDoS botnet. Maybe it's just me, though.
fnord77•4h ago
I think of the goofy Toyota fuel-cell car. I think a grand total of about 6 have been sold (leased) in California.
rnxrx•7h ago
I'm curious about why the performance gains mentioned were so substantial for Qwen vs Llama?
AlekseiSavin•6h ago
It looks like llama.cpp has some performance issues with bf16.
homarp•7h ago
Can you explain the type of quantization you support?

would https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally be faster with mirai?

AlekseiSavin•6h ago
Right now we support AWQ, but we are working on various quantization methods in https://github.com/trymirai/lalamo
smpanaro•7h ago
In practice, how often do the models use the ANE? It sounds like you are optimizing for speed which in my experience always favors GPU.
AlekseiSavin•6h ago
You're right. Modern edge devices are powerful enough to run small models, so the real bottleneck for a forward pass is usually memory bandwidth, which defines the upper theoretical limit for inference speed. Right now, we've figured out how to run computations in a granular way on specific processing units, but we expect the real benefits to come later, when we add support for VLMs and advanced speculative decoding, where you process more than one token at a time.
J_Shelby_J•6h ago
VLMs = very large models?
mmorse1217•6h ago
Probably vision language models.
skybrian•6h ago
What are the units on the benchmark results? I’m guessing higher is better?
AlekseiSavin•6h ago
yeah, tokens per second
dcreater•6h ago
Somewhat faster on small models, and it requires a new format.

I'm not sure what the goal is for this project. I'm not seeing how it offers enough benefit to get adopted by the community.

koakuma-chan•6h ago
Written in Rust is a big one for me.
worldsavior•5h ago
It's utilizing the Apple ANE and probably other optimizations provided by Apple's frameworks. I'm not sure if llama.cpp uses them, but if it doesn't, the benchmark on GitHub says it all.
zdw•6h ago
How does this bench compared to MLX?
jasonjmcghee•6h ago
I use MLX in LM Studio and it doesn't have whatever issues llama.cpp is showing here.

Qwen3-0.6B at 5 t/s doesn't make any sense. Something is clearly wrong for that specific model.

giancarlostoro•4h ago
Hoping the author can answer, I'm still learning about how this all works. My understanding is that inference is "using the model" so to speak. How is this faster than established inference engines specifically on Mac? Are models generic enough that if you build e.g. an inference engine focused on AMD GPUs or even Intel GPUs, would they achieve reasonable performance? I always assumed because Nvidia is king of AI that you had to suck it up, or is it just that most inference engines being used are married to Nvidia?

I would love to understand how universal these models can become.

darkolorin•1h ago
Basically, "faster" means better performance, e.g. tokens/s, without losing quality (benchmark scores for the models). So when we say faster, we mean we provide more tokens per second than llama.cpp. That means we effectively utilize the available hardware APIs (for example, we wrote our own kernels) to perform better.
nodesocket•3h ago
I just spun up an AWS EC2 g6.xlarge instance to do some LLM work. The GPU is an NVIDIA L4 with 24 GB and it costs $0.8048 per hour. I'm starting to think about switching to an Apple mac2-m2.metal instance at $0.878 per hour. The big question is that the Mac instance only has 24 GB of unified memory.
khurs•19m ago
Unified memory doesn't compare to an Nvidia GPU; the latter is much better.

It just depends on what performance level you need.

floam•3h ago
How does this compare to https://github.com/Anemll/Anemll?
zackangelo•1h ago
We also wrote our inference engine in Rust, for mixlayer. Happy to answer any questions from those trying to do the same.

Looks like this uses ndarray and mpsgraph (which I did not know about!); we opted to use candle instead.

khurs•23m ago
Have you added it to Homebrew and other package managers yet?

Also, any app deployed to prod but developed on a Mac needs to be consistent, i.e. work on Linux/in a container.