frontpage.

Reasoning Models Reason Well, Until They Don't

https://arxiv.org/abs/2510.22371
68•optimalsolver•1h ago•37 comments

AMD Could Enter ARM Market with Sound Wave APU Built on TSMC 3nm Process

https://www.guru3d.com/story/amd-enters-arm-market-with-sound-wave-apu-built-on-tsmc-3nm-process/
142•walterbell•7h ago•85 comments

Show HN: A fast, dependency-free traceroute implementation in pure C

https://github.com/davidesantangelo/fastrace
23•daviducolo•1h ago•13 comments

Affinity Studio now free

https://www.affinity.studio/get-affinity
1018•dagmx•19h ago•672 comments

Result is all I need

https://rockyj-blogs.web.app/2025/10/25/result-monad.html
32•rockyj•5d ago•17 comments

Rouille – Rust Programming, in French

https://github.com/bnjbvr/rouille
129•mihau•1w ago•56 comments

Kimi Linear: An Expressive, Efficient Attention Architecture

https://github.com/MoonshotAI/Kimi-Linear
150•blackcat201•10h ago•11 comments

Phone numbers for use in TV shows, films and creative works

https://www.acma.gov.au/phone-numbers-use-tv-shows-films-and-creative-works
197•nomilk•13h ago•90 comments

How the cochlea computes (2024)

https://www.dissonances.blog/p/the-ear-does-not-do-a-fourier-transform
433•izhak•17h ago•136 comments

Bertie the Brain

https://en.wikipedia.org/wiki/Bertie_the_Brain
9•breppp•1w ago•0 comments

987654321 / 123456789

https://www.johndcook.com/blog/2025/10/26/987654321/
575•ColinWright•4d ago•96 comments

Show HN: Quibbler – A critic for your coding agent that learns what you want

https://github.com/fulcrumresearch/quibbler
62•etherio•10h ago•15 comments

Free software scares normal people

https://danieldelaney.net/normal/
698•cryptophreak•19h ago•446 comments

A Closer Look at Piezoelectric Crystal

https://www.samaterials.com/content/a-closer-look-at-stressed-piezo-crystals.html
17•pillars•1w ago•6 comments

Springs and bounces in native CSS

https://www.joshwcomeau.com/animation/linear-timing-function/
201•feross•2d ago•30 comments

A Classic Graphic Reveals Nature's Most Efficient Traveler

https://www.scientificamerican.com/article/a-human-on-a-bicycle-is-among-the-most-efficient-forms...
17•ako•1w ago•11 comments

John Carmack on mutable variables

https://twitter.com/id_aa_carmack/status/1983593511703474196
133•azhenley•8h ago•163 comments

NPM flooded with malicious packages downloaded more than 86k times

https://arstechnica.com/security/2025/10/npm-flooded-with-malicious-packages-downloaded-more-than...
271•jnord•1d ago•196 comments

Florian Schneider Collection: Instruments and equipment up for auction

https://www.juliensauctions.com/en/articles/the-florian-schneider-collection-rare-instruments-and...
33•cainxinth•3d ago•9 comments

Minecraft HDL, an HDL for Redstone

https://github.com/itsfrank/MinecraftHDL
176•sleepingreset•15h ago•26 comments

Exceptional Measurement of Chirality

https://www.rsc.org/news/2019/july/exceptional-measurement-of-chirality
24•bryanrasmussen•6d ago•4 comments

Jack Kerouac, Malcolm Cowley, and the difficult birth of On the Road

https://theamericanscholar.org/scrolling-through/
54•samclemens•2d ago•32 comments

Show HN: I made a heatmap diff viewer for code reviews

https://0github.com
225•lawrencechen•20h ago•62 comments

Roadmap for Improving the Type Checker

https://forums.swift.org/t/roadmap-for-improving-the-type-checker/82952
60•glhaynes•9h ago•17 comments

Claude Is Down

https://status.claude.com/incidents/s5f75jhwjs6g
28•stuartmemo•42m ago•12 comments

Modifying a radiation meter for (radioactive) rock collecting

https://maurycyz.com/projects/ludlum3/
40•8organicbits•6d ago•1 comments

Denmark reportedly withdraws Chat Control proposal following controversy

https://therecord.media/demark-reportedly-withdraws-chat-control-proposal
399•layer8•13h ago•132 comments

Lenses in Julia

https://juliaobjects.github.io/Accessors.jl/stable/lenses/
115•samuel2•4d ago•37 comments

Some rando turned me into a meme coin

https://cloudfour.com/thinks/that-time-some-rando-turned-me-into-a-meme-coin/
19•tbassetto•1h ago•2 comments

Show HN: Front End Fuzzy and Substring and Prefix Search

https://github.com/m31coding/fuzzy-search
37•kmschaal•2d ago•3 comments

AMD Could Enter ARM Market with Sound Wave APU Built on TSMC 3nm Process

https://www.guru3d.com/story/amd-enters-arm-market-with-sound-wave-apu-built-on-tsmc-3nm-process/
142•walterbell•7h ago

Comments

mgh2•6h ago
More speculation?
dwood_dev•5h ago
My guess from previous reporting on this is that it was an experiment that might never be released.

ARM isn't nearly as interesting given the strides both Intel and AMD have made with low power cores.

In any scenario where Sound Wave makes sense, Zen-LP cores align better for AMD.

spockz•4h ago
It is interesting for AMD because having an on-par ARM chip means they can keep selling chips if the rest of the market switches to ARM. This is largely driven by Apple and by the cloud providers wanting more efficient, higher-density chips.

Apple isn’t going to switch back to AMD64 any time soon. Cloud providers will switch faster if X64 chips become really competitive again.

codedokode•4h ago
I am not sure if cloud providers want ARM - the most valuable resource is rack space, so you want to use the most powerful CPU, not the one using less energy.
arjie•4h ago
Well, Amazon does offer Graviton 4 (quite fast and useful stuff) alongside their Epyc machines, so there is some utility to them. A 9654 is much faster than a Graviton 4.

EDIT: Haha, I was going off our workloads but hilariously there are some HPC-like workloads where benchmarks show the Graviton 4 smoking a 9654 https://www.phoronix.com/review/graviton4-96-core/4

I suppose ours must have been more like the rest of the benchmarks (which show the 9654 faster than the Graviton 4).

Someone•3h ago
Cooling takes up rack space, too. There also are workloads that aren’t CPU constrained, but GPU or I/O constrained. On such systems, it’s better to spend your heat budget on other things than CPUs.
pxeger1•2h ago
> the most valuable resource is rack space

I've always heard it's cooling capacity. I'm also pretty confident that's true

friendzis•2h ago
> the most valuable resource is rack space

The limit is power capacity, and quite often thermal. Newer DCs might be designed with larger thermal envelopes; however, rack space is nearly meaningless once you exhaust the thermal capacity of the rack/aisle.

Performance within the thermal envelope is a very important consideration in datacenters. If a new server offers double the performance at double the power, it is a viable upgrade path only for DCs that have that power reserve in the first place.
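
To make that tradeoff concrete, a toy calculation with hypothetical numbers (the 10 kW rack budget and server wattages are illustrative only, not real figures from the thread):

    # Fixed rack power budget vs. per-server power draw (hypothetical figures).
    rack_budget_w = 10_000                    # e.g. a 10 kW rack
    old_server_w, new_server_w = 500, 1_000   # new server: 2x performance at 2x power

    old_count = rack_budget_w // old_server_w   # 20 servers fit
    new_count = rack_budget_w // new_server_w   # 10 servers fit
    print(f"old rack: {old_count} servers, new rack: {new_count} servers")
    # Total rack throughput is unchanged (20 * 1x == 10 * 2x), so the upgrade
    # only pays off if the DC has spare power/cooling headroom for more of
    # the new servers, or gains something else (space, licensing, density).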

imtringued•1h ago
Rack space limits include power limits, e.g. 10 kW per rack.
dbdr•4h ago
> given the strides both Intel and AMD have made with low power cores.

Any pointers regarding that? How does the computing power to watts ratio look these days across major CPU architectures?

Someone•4h ago
The page this article got its info from (https://www.ithome.com/0/889/173.htm) says (according to Safari’s translation):

“IT Home News on October 13, @Olrak29_ found that the AMD processor code-named "Sound Wave" has appeared in the customs data list, confirming the company's processor development plan beyond the x86 architecture”

I think that means they are planning to export parts.

I think there still is some speculation involved as to what those parts are, and they might export them only for their own use, but is that likely?

LarsDu88•3h ago
cough gaming device
adrian_b•2h ago
AMD makes laptop CPUs with a good performance-per-watt ratio, but they are designed for higher power consumption, typically 28 W, or at least 15 W.

AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumption of 6 W or 10 W; AMD has never had a Zen CPU for this power range.

If the rumors about this "Sound Wave" are true, then AMD will finally begin to compete again in this range of TDP, a market they abandoned many years ago (after the AMD Jaguar and Puma CPUs) because all their resources were focused on designing Zen CPUs for higher TDPs.

For cheap and low-power CPUs, the expensive x86-64 instruction decoder may matter, unlike for bigger CPUs, so choosing the Aarch64 ISA may be the right decision.

Zen compact cores provide the best energy efficiency for laptops and servers, especially for computation-intensive tasks, but they are not appropriate for cheap low-power devices whose computational throughput is less important than other features. Zen compact cores are big in comparison with ARM Cortex-X4, Intel Darkmont or Qualcomm cores and their higher performance is not important for cheap low-power devices.

wmf•5h ago
I don't see why Sound Wave would have any advantage, even efficiency, over a similar Zen 5/6 design. Microsoft must really want ARM if they're having this chip made.
DeepYogurt•5h ago
It could just be a play to make sure there's a second source to Qualcomm.
Findecanor•4h ago
The core count is relatively low though: 2P + 4E, whereas Snapdragon X parts have 8 or 10 performance cores, suggesting this could be for a low-end tablet ... or a game console?
DeathArrow•3h ago
They made countless attempts to use ARM, but all failed. Consumers didn't care because they couldn't run their software. Microsoft won't solve the problem until they provide a way to run all relevant software on ARM.
debugnik•3h ago
Microsoft already designed a modified ARM ABI [1] compatible with emulated X86-64 just for this transition. But it's a Windows 11 feature. I wonder if the refusal of many of us to switch from Windows 10 is part of the reason why they're still idling on an ARM strategy.

[1]: https://learn.microsoft.com/en-us/windows/arm/arm64ec-abi

p_l•1h ago
Part of the issue was incomplete amd64 emulation on Windows, which is why several MS products continued to ship 32-bit: while Microsoft might recompile their own software for ARM, business users had binary-only extensions that they expected to keep using.
Zardoz84•2h ago
Apple did an excellent job with the switch. I don't see why it should fail here.
wongarsu•2h ago
A year or two ago I used a Windows 11 laptop with an ARM CPU, and at least for me everything just worked. The drivers weren't as good, but all my x86-64 software ran just fine
guiriduro•1h ago
It's pretty decent. Decent enough, in fact, that I can run a Windows 11 ARM install in VMware Fusion on my MacBook M4 Pro, and it will happily run Windows ARM and x86 binaries (via the built-in MS x86 emulation) decently fast and without complaint (we're talking apps; gaming I haven't tried).
t312227•5h ago
hello,

imho. (!)

i think this would be great!!

personally i totally understood why AMD gave up on its last attempt - the A1100 opterons - about 10 years ago in favor of the then-new ryzen architecture:

* https://en.wikipedia.org/wiki/List_of_AMD_Opteron_processors...

but what i would really like to see: an ARM soc/apu on an "open"*) (!) hardware-platform similar to the existing amd64 pc hardware.

*) "open" as in: i'm able to boot whatever (vanilla) arm64 linux-distribution or other OS i want ...

i have to add: i'm personally offended by the amount of tinkering with the firmware/boot process that is necessary to get, for example, the raspberry pi 5 (or 4) to boot vanilla debian/arm64 ... ;)

br, a..z

ps. even if it's a bit off-topic in this context, as a reminder, a link to a slightly older article about an interview with jim keller about how ISA no longer matters that much ...

"ARM or x86? ISA Doesn’t Matter"

* https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

jabl•4h ago
> "ARM or x86? ISA Doesn’t Matter"

> * https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

Some people, for some strange reason, want to endlessly relitigate the old 1980s RISC vs CISC flamewars. Jim Keller's interview above is a good antidote to that. Yes, RISC vs CISC matters for something like a simple in-order core you might see in embedded systems. For a big OoO core, much less so.

That doesn't mean you'd end up with x86 if you designed a clean-sheet 'best practices' ISA today. Probably it would indeed look something like aarch64 or RISC-V. So certainly in that sense RISC won. But the win isn't so overwhelming that it overcomes the value of the x86 software ecosystem in the markets where x86 plays.

consp•3h ago
You would also get rid of all the 8/16-bit shenanigans still somewhat present.
jabl•3h ago
Intel had a project doing that a few years ago, called X86S. It was killed after industry opposition.
stevefan1999•5h ago
Legendary Chip Architect, Jim Keller, Says AMD ‘Stupidly Cancelled’ K12 ARM CPU Project After He Left The Company: https://wccftech.com/legendary-chip-architect-jim-keller-say...

Could be a revival but for different purposes

high_na_euv•1h ago
Funny how some of his projects got cancelled, like K12 at AMD or Royal Core at INTC, and people always act like that was a terrible decision, yet AMD is up like 100x on the stock market and INTC... time's gonna tell.
guerrilla•1h ago
Cult of personality... or maybe people just want cool stuff for fun.
Keyframe•47m ago
Is the stock up because of them or despite them?
high_na_euv•43m ago
It is hard to evaluate it reliably.
alberth•4m ago
He also left AMD 10 years ago (2015).

https://en.wikipedia.org/wiki/Jim_Keller_(engineer)

Findecanor•4h ago
BTW. ChipsAndCheese has a recent article on MALL / Infinity Caches, evaluating it in the x86-based AMD Strix Halo APU:

https://chipsandcheese.com/p/evaluating-the-infinity-cache-i...

arjie•4h ago
Well, I'm eager to use it. For my home server I use an old power-hungry Epyc 7B13. It's overkill but it can run a lot of things (my blog, other software I use, my family's various pre-configured MCPs we use in Custom GPTs, rudimentary bioinformatics). The truth though is that I hate having to cross-compile from my M1 Mac to the x86_64 server. I would much rather just do an ARM to ARM platform cross-compile (way easier to do and much faster on the Orbstack container platform).

So I went out looking for an ARM-based server of equivalent strength to a Mac Mini, and there's really not that much out there. There's the Qualcomm Snapdragon X Elite, which is really only in one actually buyable thing (the Lenovo IdeaCentre) and some vaporware product from Geekom or the like. But this thing doesn't have very good Linux support (it's built for ARM Windows, apparently), and it's much costlier than some Apple Silicon running Asahi Linux.

So I'm eventually going to end up with some M1 Ultra Studio or an M4 Mini running Asahi Linux, which seems like such a complete inversion of the days when people would make Hackintoshes.

pengaru•3h ago
Ampere?
arjie•1h ago
I looked into them but they didn't seem price/performance/watt competitive.
WorldPeas•4h ago
Fingers crossed it'll eventually get a Framework board.
monegator•1h ago
I always wonder why nobody has ever released a Framework mainboard with a Rockchip. There is even one with a (very) slow RISC-V chip for OS developers, FFS.
jesperwe•4h ago
Sounds like a PERFECT chip for my next HomeAssistant box :-D

- Low power when only idling through events from the radio networks

- Low power and reasonable performance when classifying objects in a few video feeds.

- Higher power and performance when occasionally doing STT/TTS and inference on a small local LLM

nsbk•2h ago
My thoughts exactly! Although I may end up getting some Mini M1/M2 variant with Asahi Linux instead
DeathArrow•4h ago
A long time ago, Intel predicted ARM wouldn't be a big deal, and they sold XScale to Marvell.
KeplerBoy•4h ago
It's only a big deal because of x86 licensing.
DeathArrow•3h ago
I'm curious what operating system this will run. Linux, Android, Windows?
criticalfault•3h ago
If it was ordered by Microsoft and paid for by Microsoft to be developed, fine.

But wouldn't it make more sense for AMD to go into RISC-V at this point in time?

jmspring•3h ago
There are two predominant architectures right now (right or wrong): amd64 and arm64. Why the F would AMD invest in RISC-V when their GPUs are well above Intel's in specs? And explain the business/market approach for RISC-V...
darkamaul•3h ago
Better (or simply more) ARM processors, no matter who makes them, are a win. They tend to be far more power-efficient, and with performance-per-watt improving each generation, pushing for wider ARM adoption is a practical step toward lowering overall energy consumption.
coffeebeqn•2h ago
How is running desktop Linux on these?
hmlwilliams•2h ago
I run desktop Linux via postmarketOS on a Lenovo Duet 5 (Snapdragon 7c). It isn't the most powerful device and the webcam doesn't work, but other than that it works well and the battery life is excellent.
fransje26•24m ago
> the webcam doesn't work

But... why? Of all things, I would have expected the webcam not to be CPU-related.

avhception•10m ago
IIRC, it's because the ARM designs tend to use camera modules that come from smartphone-land.

Cameras used on x86-64 usually just work using the standard USB webcam driver (what is that called again? uvcvideo?). But these smartphone-land cameras don't adhere to that standard; they probably don't connect over USB at all. They are designed to be used with the SoC vendor's downstream fork of Android or whatever, using proprietary blobs.
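
A small illustration of the split (a sketch, assuming a typical Linux sysfs layout): UVC webcams enumerate as generic /dev/video* devices bound to the uvcvideo driver, which is why they "just work", while MIPI/CSI smartphone-style modules need a vendor driver stack and often never show up this way.

    # List video devices and the kernel driver bound to each one.
    # Assumes the usual /sys/class/video4linux layout; UVC webcams typically
    # report "uvcvideo", while vendor MIPI/CSI stacks (if present at all)
    # report SoC-specific drivers.
    from pathlib import Path

    for dev in sorted(Path("/dev").glob("video*")):
        driver_link = Path(f"/sys/class/video4linux/{dev.name}/device/driver")
        driver = driver_link.resolve().name if driver_link.exists() else "unknown"
        print(f"{dev}: driver={driver}")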

ahoka•2h ago
Are ARM processors inherently power efficient? I doubt it.

Performance per watt is increasing due to lithography.

Also, Jevons paradox.

ggm•2h ago
Aside from lithography there's clever design. I don't think you can quantify that but it's not nothing.
eb0la•1h ago
Actually, power efficiency was a side effect of the straightforward design of the first ARM processor. Acorn needed a cheap (but powerful) processor for its BBC computers and built a RISC chip. When they started testing the processor, they found out it drew very little power...

... the rest is history.

jorvi•1h ago
They aren't inherently power efficient for technical reasons, but for design-culture reasons.

Traditionally x86 has been built powerful and power hungry and then designers scaled the chips down whereas it's the opposite for ARM.

For whatever reason, this also makes it possible to get much bigger YoY performance gains on ARM. The Apple M4 is a mature design [0], and yet a year later the M5 is +15% on CPU, +30% on GPU, and +28% on memory bandwidth.

The Snapdragon Elite X series is showing a similar trajectory.

So Jim Keller ended up being wrong that ISA doesn't matter. It's just that it's the people behind the ISA that matter, not the silicon.

[0] its design traces all the way back to the A12 from 2018, and in some fundamental ways even to the A10 from 2016.

IshKebab•1h ago
Do you have any actual evidence for that? Intel does care about power efficiency - they've been making mobile CPUs for decades. And I don't think they are lacking intelligent chip designers.

I would need some strong evidence to make me think it isn't the ISA that makes the difference.

high_na_euv•1h ago
Isn't Lunar Lake Intel's first mobile chip with a focus on energy efficiency? And it is reasonably efficient.

We will see how big an improvement its successor, Panther Lake, is in January on the 18A node.

>I would need some strong evidence to make me think it isn't the ISA that makes the difference.

It is like saying that Java syntax is faster than C# syntax.

Everything is about the implementation: compiler, JIT, runtime, stdlib, etc.

If you spend decades of effort on performance and GHz, then don't be shocked that someone who spent decades on energy efficiency is better in that category.

jorvi•23m ago
https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

Basically, x86 uses op caches and micro-ops to reduce instruction decoder use, the decoder itself doesn't use significant power, and ARM also uses op caches and micro-ops to improve performance. So there is little effective difference. Micro-ops and branch prediction are where the big wins are, and both ISAs use them extensively.

If the hardware is equal and the designers are equally skilled, yet one ISA consistently pulls ahead, that leads to the likely conclusion that the way the chips get designed must be different for the winning ISA.

For what it's worth, the same is happening in GPU land. Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W (!).

That same M1 also smoked an Intel i9.

high_na_euv•1h ago
As far as I know, people aren't part of an ISA :)
pjmlp•1h ago
With the caveat that ARM isn't an industry standard the way the PC has become; thus, while proprietary OSes can thrive, FOSS faces a much bigger challenge beyond OEM-specific distros or downstream forks.

Stuff like this, https://www.amazon.de/-/en/Microsoft-Corporation/dp/15723171...

antonkochubey•10m ago
There are the Arm SystemReady and ServerReady requirements/specifications that enable generic board support by the OSes.
pjmlp•1m ago
Thanks, I thought we were still on device trees and little else.
high_na_euv•1h ago
The ISA is not that relevant; it is all about what you want to achieve with your CPU.
spiderfarmer•3h ago
I don't think I'm using x86 for anything anymore. All the PCs in my home are ARM, the phones are ARM, the TVs are ARM, and even the webservers I'm running are ARM nowadays.
dangus•2h ago
Wow. This could really be a big deal, especially if it’s more of an openly available product than what Qualcomm has on offer.

For me personally I’d love it if this made it to a framework mainboard. I wouldn’t even mind the soldered memory, I understand the technical tradeoff there.

heavyset_go•2h ago
I want a hybrid APU, perhaps an x86 host with ARM co-processors that can be used to run arm64 code natively/do some clever virtualization. Or maybe the other way around, with ARM hosts and x86 co-processors. Or they can do some weird HMP stuff instead of co-processors.
signa11•2h ago
risc-v would have been so much cooler.
signa11•2h ago
Why the downvote? An explanation, please... thank you!
ggm•2h ago
Rosetta shows translation works. Why complicate the OS with multiple ISAs?
jillesvangurp•1h ago
Or put differently, why bake the CPU instruction sets into the chips? What Apple has shown is that emulating x86 can actually rival or be faster than a natively running x86 chip. There are currently two major ones (ARM, x86) and an up-and-coming minor one (e.g. RISC-V), and lots of legacy ones (SPARC, MIPS, PowerPC, etc.). All these can be emulated. Native compilation is an optimization that can happen at build time (traditional compilers), at distribution time (Android stores do this), just before the first run (Rosetta), or on the fly (QEMU).

Chip manufacturers need to focus on making power-efficient, high-performance workhorses. Apple figured this out first and got frustrated enough with Intel, who was more preoccupied with vendor lock-in than with doing the one thing they were supposed to do: developing best-in-class chips. The jump from x86 to M1 completely destroyed Intel’s reputation on that front. Turns out all those incremental changes over the years were them just moving deck chairs around. AMD was just tagging along and did not offer much more than them. They too got sidelined by Apple’s move. They never were much better in terms of efficiency and speed. So them now maybe getting back into ARM chips is a sign that times are changing and x86 is becoming a legacy architecture.

This shouldn’t matter. Both Apple and Microsoft have emulation capability. Apple is of course retiring theirs, but that’s more of a prioritization/locking strategy than it is for technical reasons. This is the third time they’ve pulled off emulation as a strategy to go to a new architecture: Motorola 68000 to PowerPC to x86 to ARM. Emulation has worked great for decades. It has broken the grip X86 has had on the market for four decades.

GCUMstlyHarmls•2h ago
I'm too dumb to know why?

Why have both to run native arm64 code? Nearly anything you'd want is cross-compiled/compilable (save some macOS stuff, but that's more than just CPU architecture).

My understanding is that ARM chips can be more efficient? Hence them being used in phones etc.

I guess it would let you run android stuff "natively"?

Or perhaps you imagine running Blender in x64 mode and Discord on the low-wattage ARM chip?

pantulis•2h ago
Anybody else find it very confusing that this is called Sound Wave and it's not a specific chip for sound synthesis applications?
rwmj•2h ago
I was hoping it'd be a very cool sound card, perhaps with unlimited General MIDI channels.
fecal_henge•2h ago
10^5 orchestra hit polyphony.
atoav•2h ago
Finally a realistic helicopter sound?
bitwize•2h ago
Perhaps it is named after the Decepticon?
xoac•1h ago
Not sure what their intention is, of course, but nowadays there are A LOT of Cortexes in various sound gear. Plenty in things like Eurorack, but also outboard equipment like the Eventide H9000, etc.
noelwelsh•39m ago
From the name you'd expect a simple sound card, but look deeper and there is more than meets the eye [1]

[1]: https://en.wikipedia.org/wiki/Soundwave_(Transformers)

rwmj•2h ago
I have an AMD Seattle in a cupboard somewhere. https://rwmj.wordpress.com/2017/06/01/amd-seattle-lemaker-ce...
sylware•2h ago
They should move to RISC-V instead.
sydbarrett74•2h ago
That will probably happen eventually, but right now RISC-V only has the horsepower for embedded or peripheral uses. It will continue to nip at ARM's heels for the next 5-10 years.
gsliepen•2h ago
Could be an interesting chip for a future Raspberry Pi model? With Radeon having nice open source drivers, it would be easy to run a vanilla Linux OS on it. The TDP looks compatible as well.
xbmcuser•1h ago
Oh, I hope the price is low enough for this to be a real media-box chip competitor for streaming devices. The Nvidia Shield's Tegra chip from 2015 is still one of the best in this space, and with Nvidia making all the AI money, it's not interested in making a new device. The Apple TV, the only real alternative, does not support audio passthrough, so it is not as open as Android or Linux media boxes.
fithisux•1h ago
Now imagine the people who have written x86/x64 assembly desktop apps, or inline assembly in native code.

They will be very happy.

moffkalast•1h ago
> Memory support is another highlight: the chip integrates a 128-bit LPDDR5X-9600 controller and will reportedly include 16 GB of onboard RAM, aligning with current trends in unified memory designs used in ARM SoCs. Additionally, the APU carries AMD’s fourth-generation AI engine, enabling on-device inference tasks

128-bit LPDDR5X-9600 is about 150 GB/s; that's 50% better than an Orin NX. If they can sell these things for less than like $500, it would be a pretty decent deal for edge inference. 16 GB is ridiculously tiny for the use case, though, when it's actually more like 15 in practice, and the OS and other stuff then takes another two or three, leaving you with like 12 maybe. Hopefully there's a 32 GB model eventually...
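
For reference, the ~150 GB/s figure falls straight out of bus width times transfer rate; here's a minimal sketch of that arithmetic (the 102.4 GB/s Orin NX figure is an assumed spec used only for the comparison):

    # Back-of-the-envelope peak bandwidth for the rumored memory config.
    def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
        # (bus width in bytes) * (mega-transfers per second) / 1000 -> GB/s
        return bus_width_bits / 8 * transfer_rate_mt_s / 1000

    sound_wave = peak_bandwidth_gb_s(128, 9600)  # ~153.6 GB/s for 128-bit LPDDR5X-9600
    orin_nx = 102.4                              # GB/s, assumed spec for comparison
    print(f"Sound Wave (rumored): {sound_wave:.1f} GB/s")
    print(f"Advantage over Orin NX: {sound_wave / orin_nx - 1:.0%}")  # ~+50%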

thoughtsyntax•39m ago
It’s exciting to see AMD trying ARM again, competition always brings better chips for everyone.