Their MI300s already beat them, and the MI400s are coming soon.
Note: my previous testing was on a full (8x) MI300X node; currently I'm testing on just a single MI300X GPU, so it's not quite apples-to-apples. Multi-GPU/multi-node training is still a question mark; this is just a single data point.
OTOH, by emphasizing datacenter hardware, they can cover a relatively small portfolio and maximize access to it via cloud providers.
As much as I'd love to see an entry-level MI350-A workstation, that's not something that will likely happen.
Which SDKs do they offer that can do neural network inference and/or training? I'm just asking because I looked into this a while ago and felt a bit overwhelmed by the number of options. It feels like AMD is trying many things at the same time, and I’m not sure where they’re going with all of it.
Plus their consumer card support is questionable, to say the least. I really wish it were a viable alternative, but swapping to CUDA saved me some headaches and a ton of time.
Having to run MIOpen benchmarks for HIP can take forever.
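If the slow part is MIOpen's exhaustive kernel search, one mitigation (a sketch based on MIOpen's documented tuning environment variables; exact behavior varies by ROCm version, so verify against your install) is to switch the find mode and persist the tuning database:

```shell
# Tell MIOpen to use its fast heuristic lookup instead of exhaustively
# benchmarking every kernel candidate (documented MIOpen env var).
export MIOPEN_FIND_MODE=FAST

# Point the user find-db at a persistent path so tuning results are
# reused across runs (the path here is just an example).
export MIOPEN_USER_DB_PATH="$HOME/.config/miopen"
```

FAST trades some peak kernel performance for much shorter startup; you can drop back to the default NORMAL mode for a final tuned run.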
halJordan•9h ago
AMD deserves exactly zero of the credulity this writer heaps onto them. They just spent four months not supporting their RDNA4 lineup in ROCm after launch; AMD is functionally capable of day-120 support. None of the benchmarks disambiguated where the performance is coming from. They are 100% lying on some level, representing their fp4 performance against fp8/fp16.
viewtransform•7h ago
"25 complimentary GPU hours (approximately $50 US of credit for a single MI300X GPU instance), available for 10 days. If you need additional hours, we've made it easy to request additional credits."
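For scale, the quoted promo works out to a simple implied hourly rate (just arithmetic on the numbers in the quote):

```python
# Implied rate from the quoted offer: ~$50 of credit covering 25 GPU hours
# on a single MI300X instance.
credit_usd = 50
gpu_hours = 25
rate_usd_per_hour = credit_usd / gpu_hours
print(rate_usd_per_hour)  # 2.0, i.e. ~$2/hour for one MI300X
```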
booder1•7h ago
They should care about the availability of their hardware so large customers don't have to find and fix their bugs. Let consumers do that...
echelon•7h ago
Makes it a little hard to develop for without consumer GPU support...
stingraycharles•5h ago
This is why it’s so important that AMD gets their act together quickly: the benefits of these kinds of things are measured in years, not months.
lhl•3h ago
Flash Attention: academia, 2y behind for AMD support
bitsandbytes: academia, 2y behind for AMD support
Marlin: academia, no AMD support
FlashInfer: academia/startup, no AMD support
ThunderKittens: academia, no AMD support
DeepGEMM, DeepEP, FlashMLA: ofc, nothing from China supports AMD
Without the long tail AMD will continue to always be in a position where they have to scramble to try to add second tier support years later themselves, while Nvidia continues to get all the latest and greatest for free.
This is just off the top of my head on the LLM side, where I'm focused, btw. Whenever I look at image/video it's even more grim.
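One way to see the gap concretely is to probe which of these kernel packages are even importable in a given environment (a minimal sketch; the package names are the commonly used distribution names and may differ from what's installed on your system):

```python
import importlib.util

# Assumed package names for the kernel libraries mentioned above;
# adjust for your environment.
candidates = ["flash_attn", "bitsandbytes", "flashinfer", "deep_gemm"]

# find_spec returns None when a package is not importable,
# without actually importing (and initializing) it.
available = {name: importlib.util.find_spec(name) is not None
             for name in candidates}

for name, ok in sorted(available.items()):
    print(f"{name}: {'importable' if ok else 'missing'}")
```

On a ROCm box, even the ones that import may lack HIP kernels, so this is only a first-pass check, not proof of working AMD support.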
littlestymaar•2h ago
I mean, if they want to stay at a fraction of the market value and profit of their direct competitor, good for them.
shmerl•6h ago
Different architectures were probably a big reason for the above issue.
jchw•6h ago
It's baffling that AMD is the same company that makes both Ryzen and Radeon. That said, the year-to-date for Radeon has been very good, aside from official ROCm support for RDNA4 taking far too long [1]. I wouldn't get overly optimistic: even if AMD finally commits hard to ROCm and Radeon, that doesn't mean they'll be able to compete effectively against NVIDIA. But the consumer showing with the 9070 XT and FSR4 hasn't been bad so far, so I'm cautiously optimistic they've decided to try to miss some opportunities to miss opportunities. Let's see how long these promises last... maybe longer than a Threadripper socket, if we're lucky :)
[1]: https://www.phoronix.com/news/AMD-ROCm-H2-2025
roenxi•5h ago
I dunno; I suppose they can execute on server parts. But regardless, a good plan here is to let someone else go first and report back.
zombiwoof•5h ago
AMD is a marketing company now
ethbr1•1h ago
You mean Ryan Smith of late AnandTech fame?
https://www.anandtech.com/author/85/