The tricks are more around optimizing for the hardware capabilities/constraints. For instance:
- conv2d is faster than linear (see Apple's post [0]), so you rewrite the model for that (example from the repo [1]; a rough sketch follows below the links)
- inputs/outputs are static shapes, so KV cache requires some creativity (I wrote about that here [2])
- compute is float16 (not bfloat16) so occasionally you have to avoid activation overflows
[0]: https://machinelearning.apple.com/research/neural-engine-tra...
[1]: https://github.com/Anemll/Anemll/blob/4bfa0b08183a437e759798...
[2]: https://stephenpanaro.com/blog/kv-cache-for-neural-engine
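To make the conv2d point concrete, here is a minimal sketch of the linear-to-conv2d rewrite in plain PyTorch. The `ANELinear` wrapper is my own illustration, not ANEMLL's actual code; the math is identical, and the benefit comes from presenting the ANE a 4D, channels-first layout.

```python
# Minimal sketch of the linear -> 1x1 conv2d rewrite (illustrative, not ANEMLL's
# actual code). The math is identical; the point is to feed the ANE a 4D,
# channels-first (B, C, 1, S) layout instead of (B, S, C).
import torch
import torch.nn as nn

class ANELinear(nn.Module):
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.conv = nn.Conv2d(linear.in_features, linear.out_features,
                              kernel_size=1, bias=linear.bias is not None)
        # Reuse the trained weights: (out, in) -> (out, in, 1, 1)
        self.conv.weight.data = linear.weight.data[:, :, None, None]
        if linear.bias is not None:
            self.conv.bias.data = linear.bias.data

    def forward(self, x):                    # x: (batch, seq, channels)
        x = x.transpose(1, 2).unsqueeze(2)   # -> (batch, channels, 1, seq)
        x = self.conv(x)
        return x.squeeze(2).transpose(1, 2)  # -> (batch, seq, out_channels)

# Quick check that both paths agree
lin = nn.Linear(64, 128)
x = torch.randn(1, 16, 64)
assert torch.allclose(lin(x), ANELinear(lin)(x), atol=1e-4)
```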
Edit: I changed llama.cpp to whisper.cpp - I didn’t realize that llama.cpp doesn’t have a coreml option like whisper.cpp does.
Currently it is used, for example, through the Vision framework for OCR tasks (when previewing an image in macOS, OCR runs in the background on the ANE). It is also used by some Apple Intelligence features that run locally (e.g. when I asked Writing Tools to rewrite this comment, I saw a spike in ANE usage).
The ANE can also run diffusion image models (through Core ML; diffusers has a nice frontend for that), but my understanding is that it is primarily meant for "light" ML tasks within an application rather than for running larger models (that's also possible, but they will probably run slower than on the GPU).
They claim their ANE-optimized models achieve "up to 10 times faster and 14 times lower peak memory consumption compared to baseline implementations."
AFAIK, neither MLX nor llama.cpp supports the ANE, though llama.cpp has started exploring the idea [0].
What's weird is that MLX is made by Apple, and yet even they can't support the ANE given its closed-source API! [1]
[0]: https://github.com/ggml-org/llama.cpp/issues/10453
[1]: https://github.com/ml-explore/mlx/issues/18#issuecomment-184...
More extensive information from the tinygrad folks at https://github.com/tinygrad/tinygrad/tree/master/extra/accel... (note that this is also similarly outdated) seems to basically confirm the above.
(The jury is still out for M3/M4, which currently have no Asahi support and thus no current prospects for driving the ANE bare-metal. Note, however, that the reported ANE performance numbers for the M3/Pro/Max are quite close to the M2's, so there may not be a real improvement there either. M3 Ultra and especially the M4 series may be a different story.)
I would say though that this likely excludes them from being useful for training purposes.
At that point, the ANE loses because you have to split the model into chunks and only one fits at a time.
Chunking is actually beneficial as long as all the chunks can fit into the ANE's cache. It speeds up compilation for large network graphs, and cached loads have negligible cost. On M1 the cache limit is 3-4GB, but it is higher on M2+.
I had also assumed that loading a chunk from the cache was not free because I’ve seen cache eviction on my M1, but it’s good to know that it’s no longer as big of a limitation.
also, I’m a big fan of your work! I played around with your ModernBERT CoreML port a bit ago
Maybe cache is the wrong word. This is a limit to how much can be mmap'd for the ANE at once. It's not too hard to hit on M1 if your model is in the GB range. Chunking the model into smaller pieces makes it more likely to "fit", but if it doesn't fit you have to unmap/remap in each forward pass which will be noticeable.
Awesome to hear about ModernBERT! Big fan of your work as well :)
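To make the chunking concrete, here is a hedged sketch of converting a large model as several smaller Core ML packages, so only one chunk at a time has to fit within the ANE's mapping limit. The `Chunk` wrapper, chunk count, and file names are illustrative; the coremltools calls are the standard ones.

```python
# Hedged sketch: split a big sequential model into n_chunks pieces and convert
# each piece to its own Core ML package, so only one chunk at a time has to fit
# within the ANE's mapping limit. Module/file names are illustrative.
import torch
import coremltools as ct

class Chunk(torch.nn.Module):
    """Wraps a contiguous slice of a model's blocks."""
    def __init__(self, blocks):
        super().__init__()
        self.blocks = torch.nn.ModuleList(blocks)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

blocks = [torch.nn.Linear(512, 512) for _ in range(32)]  # stand-in for real layers
n_chunks = 4
per_chunk = len(blocks) // n_chunks
example = torch.randn(1, 512)

for i in range(n_chunks):
    chunk = Chunk(blocks[i * per_chunk:(i + 1) * per_chunk]).eval()
    traced = torch.jit.trace(chunk, example)
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="x", shape=example.shape)],
        compute_units=ct.ComputeUnit.CPU_AND_NE,   # prefer the ANE, fall back to CPU
        minimum_deployment_target=ct.target.iOS16,
    )
    mlmodel.save(f"chunk_{i}.mlpackage")  # chunks get chained at inference time
```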
To answer the question though: I think this would be used when you're building an app that wants to run a small AI model while keeping the GPU free for graphics-related work, which I'm guessing is why Apple put these into their hardware in the first place.
Here is an interesting comparison between the two from a whisper.cpp thread - ignoring startup times - the CPU+ANE seems about on par with CPU+GPU: https://github.com/ggml-org/whisper.cpp/pull/566#issuecommen...
Yes, hammering the GPU too hard can affect the display server, but no, switching to the CPU is not a good alternative
Which is still painfully slow. CoreML is not a real ML platform.
blog: https://machinelearning.apple.com/research/vision-transforme...
It took multiple tries to get the model to convert to the mlpackage format at all, and then a lot of experimenting to get it to run on the ANE instead of the GPU, only to discover that constant reshaping was killing any performance benefit (either you commit to a fixed multiplication size or don't bother; see the sketch below). Even at a fixed size and using the attention mask, its operations were slower than saturating the GPU with large batches.
I also discovered that targeting the newer iOS 18 standard would break model conversion, and filed an issue on their GitHub with an example repository for easy reproduction. I got a response quickly, but almost a year later the bug is still unfixed.
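To illustrate the fixed-size point: at conversion time you can pin the input to a single shape, or enumerate a small set of allowed shapes, rather than leaving it fully dynamic. A hedged sketch with a tiny stand-in model (not a real ViT); the interesting part is the input declaration.

```python
# Illustrative only: pin the input to one shape (the safe, fast path) or to a
# small enumerated set of shapes, rather than leaving it dynamic. The tiny
# stand-in model is not a real ViT; the point is the input declaration.
import torch
import coremltools as ct

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.GELU(),
    torch.nn.Linear(256, 256),
).eval()
example = torch.randn(1, 128, 256)            # (batch, tokens, channels)
traced = torch.jit.trace(model, example)

fixed = ct.convert(
    traced,
    inputs=[ct.TensorType(name="tokens", shape=(1, 128, 256))],   # one fixed size
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)

multi = ct.convert(
    traced,
    inputs=[ct.TensorType(
        name="tokens",
        shape=ct.EnumeratedShapes(shapes=[(1, 64, 256), (1, 128, 256), (1, 256, 256)]),
    )],
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
```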
Even when George Hotz attempted to hack it so it could be used without Apple's really bad and unmaintained CoreML library, he gave up because it was impossible without breaking some pretty core OS features (certificate signing, IIRC).
The ANE/CoreML team is just not serious about making their hardware usable. Even Apple's internal MLX team can't crack that nut.
I wrote about it here [0], but the gist is you can have a fixed-size cache and slide it in chunks with each inference. Not as efficient as a cache that grows by one each time, of course. A rough sketch follows below the link.
[0]: https://stephenpanaro.com/blog/inside-apples-2023-transforme...
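A rough sketch of that sliding scheme (my paraphrase, not the exact approach from the post):

```python
# Rough sketch of a fixed-size KV cache that slides by a chunk, as described
# above (not the exact scheme from the linked post). One buffer shown; a real
# cache holds K and V per layer and head. Sizes are illustrative.
import numpy as np

class SlidingKVCache:
    def __init__(self, cache_len=512, chunk=64, head_dim=128):
        # cache_len is baked into the Core ML model's static input shape
        self.k = np.zeros((cache_len, head_dim), dtype=np.float16)
        self.chunk = chunk
        self.filled = 0

    def append(self, new_k: np.ndarray) -> None:
        if self.filled == len(self.k):
            # Slide the window by a whole chunk instead of by one position per
            # token; the attention mask must then hide the vacated tail slots.
            self.k[:-self.chunk] = self.k[self.chunk:]
            self.filled -= self.chunk
        self.k[self.filled] = new_k
        self.filled += 1
```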
I pre-ordered the Snapdragon X Dev Kit from Qualcomm, but they ended up delivering only a few units before cancelling the whole program. The whole thing turned out to be a hot-mess-express saga. THAT computer was going to be my Debian rig.
$2,000 vs. $3,500 isn't well under half either.
Prompt: "Tell me a long story about the origins of 42 being the answer."
anemll: 9.3 tok/sec, ~500MB of memory used.
mlx 8bit: 31.33 tok/sec, ~8.5GB of memory used.
mlx bf16: 27.17 tok/sec, ~15.7GB of memory used.
Memory results are from Activity Monitor across any potentially involved processes, but I feel like I might be missing something here...
[0] https://huggingface.co/anemll/anemll-DeepSeekR1-8B-ctx1024_0...
[1] https://huggingface.co/mlx-community/DeepSeek-R1-Distill-Lla...
[2] https://huggingface.co/mlx-community/DeepSeek-R1-Distill-Lla...
mlx is much faster, but anemll appeared to use only 500MB of memory compared to the 8GB mlx used.
See my other comments. anemll appears to use less memory.
[0] https://huggingface.co/anemll/anemll-llama-3.2-1B-iOSv2.0
To some degree, that's an unavoidable consequence of how long it takes to design and ship specialized hardware with a supporting software stack. By contrast, ML research is moving way faster because they hardly ever ship anything product-like; it's a good day when the installation instructions for some ML thing only include three steps that amount to "download more Python packages".
And the lack of cross-vendor standardization for APIs and model formats is also at least partly a consequence of various NPUs evolving from very different starting points and original use cases. For example, Intel's NPUs are derived from Movidius, so they were originally designed for computer vision, and it's not at all a surprise that making them do LLMs might be an uphill battle. AMD's NPU comes from Xilinx IP, so their software mess is entirely expected. Apple and Qualcomm NPUs presumably are still designed primarily to serve smartphone use cases, which didn't include LLMs until after today's chips were designed.
It'll be very interesting to see how this space matures over the next several years, and whether the niche of specialized low-power NPUs survives in PCs or if NVIDIA's approach of only using the GPU wins out. A lot of that depends on whether anybody comes up with a true killer app for local on-device AI.
GPUs are gaining their own kinds of specialized blocks, such as matrix/tensor compute units, or BVH acceleration for ray-tracing (which may or may not turn out to be useful for other stuff). So I'm not sure that there's any real distinction from that POV: a specialized low-power unit in an iGPU is going to be practically indistinguishable from an NPU, except that it will probably be easier to target from existing GPU APIs.
Possibly, depending on how low the power actually is. We can't really tell from NVIDIA's tensor cores, because waking up an NVIDIA discrete GPU at all has a higher power cost than running an NPU. Intel's iGPUs have matrix units, but I'm not sure if they can match their NPU on power or performance.
sroussey•11h ago
There was a guy using it for live video transformations and it almost caused the phones to “melt”. [2]
[1] https://machinelearning.apple.com/research/neural-engine-tra...
[2] https://x.com/mattmireles/status/1916874296460456089
ks2048•13h ago
https://discuss.pytorch.org/t/apple-neural-engine-ane-instea...
It seems intuitive that if they design hardware very specifically for these applications (beyond just fast matmuls on a GPU), they could squeeze out more performance.
astrange•10h ago
It's about performance/power ratios.
1W6MIC49CYX9GAP•13h ago
GPU + dedicated AI HW is virtually always the wrong approach compared to GPU + tensor cores.
xiphias2•13h ago
For laptops, 2x GPU cores would make more sense, for phones/tablets, energy efficiency is everything.
brigade•12h ago
[1] https://vengineer.hatenablog.com/entry/2024/10/13/080000
Archit3ch•10h ago
[1] https://arxiv.org/abs/2203.03341
brigade•10h ago
But the point was about area efficiency
rz2k•11h ago
It looked like even ANEMLL provides limited low-level access for directing processing specifically to the Apple Neural Engine, because Core ML still acts as the orchestrator. Instead, flags during conversion of a PyTorch or TensorFlow model can specify ANE-friendly operations, quantization, and parameters hinting at compute targets or optimization strategies, and at load time `MLModelConfiguration.computeUnits = .cpuAndNeuralEngine` disfavors the GPU cores.
Anyway, I didn't actually experiment with this, but at the time I thought there could be a strategy of building a speculative execution framework, with a small ANE-compatible model acting as the draft model paired with a larger target model running on the GPU cores, the idea being that the ANE's low latency and high efficiency could accelerate results.
However, I would be interested to hear the perspective of people who actually know something about the subject.
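For what it's worth, the orchestration described above might look roughly like the sketch below. This is a hedged illustration only: `draft.mlpackage`, the `input_ids`/`logits` tensor names, and `target_greedy_preds` (standing in for the big model on the GPU, e.g. via MLX) are all hypothetical, and real speculative decoding samples probabilistically rather than matching greedy tokens.

```python
# Very rough sketch of the idea above: a small Core ML draft model pinned to
# the ANE proposes a few tokens, and a larger GPU model (e.g. via MLX) checks
# them in a single forward pass. The model path, tensor names, and the greedy
# acceptance rule are all placeholders, not a real implementation.
import numpy as np
import coremltools as ct

draft = ct.models.MLModel(
    "draft.mlpackage",                        # hypothetical ANE-friendly draft model
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # keep it off the GPU
)

def draft_next_tokens(tokens: list[int], k: int) -> list[int]:
    """Greedily propose k tokens with the ANE draft model (placeholder I/O names)."""
    proposed: list[int] = []
    for _ in range(k):
        out = draft.predict({"input_ids": np.array([tokens + proposed], dtype=np.int32)})
        proposed.append(int(np.argmax(out["logits"][0, -1])))
    return proposed

def speculative_step(tokens: list[int], target_greedy_preds, k: int = 4) -> list[int]:
    """One draft-then-verify round.

    target_greedy_preds(seq) runs the big model once on the GPU and returns its
    greedy next-token prediction for every position of seq, so all k draft
    tokens are verified with a single target forward pass.
    """
    proposed = draft_next_tokens(tokens, k)
    preds = target_greedy_preds(tokens + proposed)
    accepted: list[int] = []
    for i, tok in enumerate(proposed):
        # Keep draft tokens only while they match what the target would emit.
        if preds[len(tokens) + i - 1] == tok:
            accepted.append(tok)
        else:
            break
    return accepted
```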