
A Brain-like LLM to replace Transformers

https://arxiv.org/abs/2509.26507
82•thatxliner•1h ago•18 comments

Democracy and the open internet die in daylight

https://heatherburns.tech/2025/10/22/democracy-and-the-open-internet-die-in-daylight/
78•speckx•1h ago•28 comments

Linux Capabilities Revisited

https://dfir.ch/posts/linux_capabilities/
10•Harvesterify•32m ago•0 comments

MinIO stops distributing free Docker images

https://github.com/minio/minio/issues/21647#issuecomment-3418675115
323•LexSiga•8h ago•194 comments

Die shots of as many CPUs and other interesting chips as possible

https://commons.wikimedia.org/wiki/User:Birdman86
97•uticus•4d ago•17 comments

Chezmoi introduces ban on LLM-generated contributions

https://www.chezmoi.io/developer-guide/
8•singiamtel•1h ago•1 comment

AI assistants misrepresent news content 45% of the time

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
24•sohkamyung•42m ago•11 comments

Internet's biggest annoyance: Cookie laws should target browsers, not websites

https://nednex.com/en/the-internets-biggest-annoyance-why-cookie-laws-should-target-browsers-not-...
229•SweetSoftPillow•2h ago•283 comments

Tesla Recalls Almost 13,000 EVs over Risk of Battery Power Loss

https://www.bloomberg.com/news/articles/2025-10-22/tesla-recalls-almost-13-000-evs-over-risk-of-b...
55•zerosizedweasle•1h ago•27 comments

Go subtleties

https://harrisoncramer.me/15-go-sublteties-you-may-not-already-know/
126•darccio•1w ago•68 comments

Infracost (YC W21) Hiring First Dev Advocate to Shift FinOps Left

https://www.ycombinator.com/companies/infracost/jobs/NzwUQ7c-senior-developer-advocate
1•akh•2h ago

Jaguar Land Rover hack cost UK economy an estimated $2.5B

https://www.reuters.com/sustainability/boards-policy-regulation/jaguar-land-rover-hack-cost-uk-ec...
52•giuliomagnifico•1h ago•50 comments

Evaluating the Infinity Cache in AMD Strix Halo

https://chipsandcheese.com/p/evaluating-the-infinity-cache-in
116•zdw•10h ago•47 comments

Show HN: Cadence – A Guitar Theory App

https://cadenceguitar.com/
103•apizon•1w ago•18 comments

French ex-president Sarkozy begins jail sentence

https://www.bbc.com/news/articles/cvgkm2j0xelo
202•begueradj•8h ago•264 comments

Tiny sugar spoons are popping up on NYC fast-food menus

https://gothamist.com/news/tiny-sugar-spoons-are-popping-up-on-nyc-fast-food-menus-youre-being-wa...
14•nodumbideas•29m ago•3 comments

Greg Newby, CEO of Project Gutenberg Literary Archive Foundation, has died

https://www.pgdp.net/wiki/In_Memoriam/gbnewby
273•ron_k•5h ago•47 comments

Cigarette-smuggling balloons force closure of Lithuanian airport

https://www.theguardian.com/world/2025/oct/22/cigarette-smuggling-balloons-force-closure-vilnius-...
13•n1b0m•56m ago•0 comments

Knocker, a knock based access control system for your homelab

https://github.com/FarisZR/knocker
40•xlmnxp•5h ago•66 comments

Distributed Ray-Tracing

https://www.4rknova.com//blog/2019/02/24/distributed-raytracing
14•ibobev•5d ago•6 comments

Starcloud

https://blogs.nvidia.com/blog/starcloud/
89•jonbaer•2h ago•120 comments

Patina: a Rust implementation of UEFI firmware

https://github.com/OpenDevicePartnership/patina
34•hasheddan•1w ago•5 comments

Ghostly swamp will-o'-the-wisps may be explained by science

https://www.snexplores.org/article/swamp-gas-methane-will-o-wisp-chemistry
10•WaitWaitWha•1w ago•5 comments

The Stagnant Order and the End of Rising Powers

https://www.foreignaffairs.com/united-states/stagnant-order-michael-beckley
20•csomar•1h ago•2 comments

rlsw – Raylib software OpenGL renderer in less than 5k LOC

https://github.com/raysan5/raylib/blob/master/src/external/rlsw.h
218•fschuett•17h ago•79 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
425•tamnd•23h ago•256 comments

Power over Ethernet (PoE) basics and beyond

https://www.edn.com/poe-basics-and-beyond-what-every-engineer-should-know/
202•voxadam•6d ago•151 comments

Ask HN: Our AWS account got compromised after their outage

347•kinj28•22h ago•83 comments

NASA chief suggests SpaceX may be booted from moon mission

https://www.cnn.com/2025/10/20/science/nasa-spacex-moon-landing-contract-sean-duffy
365•voxleone•1d ago•967 comments

Evaluating Argon2 adoption and effectiveness in real-world software

https://arxiv.org/abs/2504.17121
22•pregnenolone•1w ago•10 comments

Evaluating the Infinity Cache in AMD Strix Halo

https://chipsandcheese.com/p/evaluating-the-infinity-cache-in
116•zdw•10h ago

Comments

andrewstuart•9h ago
Despite this APU being deeply interesting to people who want to do local AI, anecdotally I hear that it’s hard to get models to run on it.

Why would AMD not have focused everything it possibly has on demonstrating and documenting and fixing and showing and smoothing the path for AI on their systems?

Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

AMD should do whatever it takes to avoid this sort of situation:

https://youtu.be/cF4fx4T3Voc?si=wVmYmWVIya4DQ8Ut

typpilol•9h ago
Any idea what makes models hard to run on it?

Just general compatibility between Nvidia and AMD for stuff that was built for Nvidia originally?

Or do you mean something else?

cakealert•8h ago
It's not the models, it's the tooling. Models are just weights and an architecture spec. The tooling is how to load and execute the model on hardware.

Some UX-oriented tooling has sort of solved this problem and will run on AMD: LM Studio
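
As a concrete illustration of "tooling" in this sense, here is a minimal sketch using the llama-cpp-python bindings to load a GGUF file and execute it; the model path is a placeholder, and whether it actually offloads to the iGPU depends on how the underlying llama.cpp was built (e.g. with the Vulkan or ROCm backend):

  # Hypothetical sketch: load quantized weights plus architecture metadata from
  # a GGUF file and run a completion. GPU offload only works if llama.cpp was
  # built with a suitable backend (Vulkan/ROCm on AMD hardware).
  from llama_cpp import Llama

  llm = Llama(
      model_path="./models/some-model-q4.gguf",  # placeholder path
      n_gpu_layers=-1,  # offload all layers to the GPU if the backend allows it
      n_ctx=8192,       # context length; larger contexts need more memory
  )
  out = llm("Q: What does a memory-side cache do? A:", max_tokens=64)
  print(out["choices"][0]["text"])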

pella•9h ago
"The AMD Ryzen™ AI Max+ processor is the first (and only) Windows AI PC processor capable of running large language models up to 235 Billion parameters in size. This includes support for popular models such as: Open AI's GPT-OSS 120B and Z.ai Org's GLM 4.5 Air. The large unified memory pool also allows models (up to 128 Billion parameters) to run at their maximum context length (which is a memory intensive feature) - enabling and empowering use cases involving tool-calling, MCP and agentic workflows - all available today. "

  GPT-OSS 120B MXFP4              : up to 44 tk/s
  GPT-OSS 20B MXFP4               : up to 62 tk/s
  Qwen3 235B A22B Thinking Q3 K L : up to 14 tk/s
  Qwen3 Coder 30B A3B Q4 K M      : up to 66 tk/s
  GLM 4.5 Air Q4 K M              : up to 16 tk/s
(performance tk/s ) : https://www.amd.com/en/blogs/2025/amd-ryzen-ai-max-personal-...
andrewstuart•8h ago
I’m not sure why you are telling me this.
YuukiRey•8h ago
It’s an example of AMD catering to the AI crowd to somewhat refute your claim that they are clueless.

Not exactly a gigantic mental leap.

spockz•7h ago
I think it actually reinforces the point. They know how to cater to the AI crowd in terms of hardware but still drop the ball at the software level.
storus•1h ago
Strix Halo can only allocate 96GB RAM to the GPU. So GPT-OSS 120B can only be run at Q6 at best (but activations would then need to be partially stored in CPU memory).
ondra•1h ago
GPT-OSS 120B uses native 4 bit representation, so it fits fine.
vid•36m ago
It can use only 96GB RAM on Windows, on Linux people have allocated up to 120GB. Here's one source: https://www.reddit.com/r/LocalLLaMA/comments/1nmlluu/comment...
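
A rough back-of-envelope comparison; the parameter count and bits-per-weight figures below are assumptions for illustration, taken from public model cards rather than from the thread:

  # Very rough weight-footprint estimate: params * bits_per_weight / 8 bytes.
  # Ignores KV cache, activations, and tensors kept at higher precision.
  PARAMS = 117e9  # approximate total parameter count of GPT-OSS 120B (assumed)

  def weights_gb(bits_per_weight):
      return PARAMS * bits_per_weight / 8 / 1e9

  for name, bits in [("MXFP4 (~4.25 b/w)", 4.25), ("Q6_K (~6.6 b/w)", 6.6)]:
      gb = weights_gb(bits)
      print(f"{name}: ~{gb:.0f} GB of weights; "
            f"96 GB cap: {'fits' if gb < 96 else 'does not fit'}; "
            f"120 GB cap: {'fits' if gb < 120 else 'does not fit'}")

Under these assumptions the native MXFP4 weights come to roughly 60-65 GB and fit comfortably under the 96 GB Windows cap, while a ~6.6 b/w quant lands right at that limit before any context memory, which is consistent with both comments above.
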
lmm•8h ago
Hardware companies are extremely bad at valuing software. The mystery isn't that AMD is bad at it, the mystery is that NVidia is good at it. They also have a probably 30-40 year head start. AMD is trying as much as they can, but changing culture takes time.
DeepYogurt•8h ago
Intel and ARM are also pretty good at it. AMD feels like the outlier here.
AnthonyMouse•5h ago
ARM, and even moreso the companies that make ARM devices, are terrible at it. And there's a reason for that.

The customers of hardware companies generally don't want to get proprietary software from them, because everybody knows that if they do, the hardware company will try to use it as a lock-in strategy. So if you make something which is proprietary but not amazing, nobody wants to touch it.

There are two ways around this.

The first is that you embrace open source. This is what Intel has traditionally done and it works really well when your hardware is good enough that it's what people will choose when the software is a commodity. It also means you don't have to do all the work yourself because if you're not trying to lock people in then all the community work that normally goes to trying to reverse engineer proprietary nonsense instead goes into making sure that the open source software that runs on your hardware is better than the proprietary software that runs on your competitor's.

The second is that you spend enough money on lock-in software that people are willing to use it. This works temporarily, because it takes a certain amount of time for competitors and the community to make a decent alternative, but what usually happens after that is that you have a problem because you were expecting there to be a moat and then ten thousand people showed up to each throw in a log or a bag of wet cement. Before too long the moat is filled in and you can't dig it back out, because it was premised on your thing working and their thing not, so once their thing works, that's the part that isn't under your control. And at that point the customers have a choice and the customers don't like you.

The problem AMD has is that they were kinda sorta trying to do both in GPUs. They'd make some things open source but also keep trying to hide the inner workings of the firmware from the public, which is what people need in order to allow third parties to make great software for them. But the second strategy was never going to work for AMD because a decade ago they didn't have the resources to even try and now Nvidia is the incumbent and the underdog can't execute a lock-in strategy. But the open source thing works fine here and indeed gets everyone on their side and then it's them and the whole world against Nvidia instead of just them against Nvidia. Which they're gradually starting to figure out.

rjsw•1h ago
I think ARM is trying to get better at it; they are recruiting software people. That won't have much effect on the drivers for the bits of ARM SoCs that they don't design, though.
z3ratul163071•5h ago
it follows that all are good except amd :D

i know, i know we have s...fest sw layers on other chips like the ones from qualcomm, broadcom etc.

ur-whale•3h ago
> the mystery is that NVidia is good at it

They are absolutely not.

Just less bad than AMD, which is an extremely low bar.

naasking•39m ago
> They also have a probably 30-40 year head start.

Holy exaggeration Batman!

oblio•2m ago
30 years ago we had 3Dfx :-))
sidkshatriya•8h ago
> Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

I have some theories. Firstly, Nvidia was smart enough to have a unified compute GPU architecture across its whole lineup -- consumer and commercial. AMD has this awkward split between CDNA and RDNA. So while AMD is scrambling to get CDNA competitive, RDNA is not getting as much attention as it should. I'm pretty sure its ROCm stack has all kinds of hacks trying to get things working across consumer Radeon devices (which internally are probably not well suited/tuned for compute anyway). AMD is hamstrung by its consumer hardware for now in the AI space.

Secondly, AMD is trying to be "compatible" to Nvidia (via HIP). Sadly this is the same thing that AMD did with Intel in the past. Being compatible is really a bad idea when the market leader (Nvidia) is not interested in standardising and actively pursues optimisations and extensions. AMD will always play catch up.

TL;DR AMD made some bad bets on what the hardware would look like in the future and, unlike Nvidia, never thought software was critical.

AMD now realizes that software is critical and what future hardware should look like. However, it is difficult to catch up with Nvidia, the most valuable company in the world, with almost limitless resources to invest in further improving its hardware and software. Even as AMD improves, it will continue to look bad in comparison to Nvidia as the state of the art keeps getting pushed forward.

positron26•8h ago
While Nvidia's strategic foresight explains why Nvidia is ahead, it doesn't quite capture why this isn't a challenge that AMD alone can or should tackle.

The 7,484+ companies who stand to benefit do not have a good way to split the bill and dogpile a problem that is nearly impossible to progress on without lots of partners adding their perspective via a breadth of use cases. This is why I'm building https://prizeforge.com.

Nvidia didn't do it alone. Industry should not expect or wait on AMD to do it alone. Waiting just means lighting money on fire right now. In return for support, industry can demand that more open technology be used across AMD's stack, making overall competition better in exchange for making AMD competitive.

z3ratul163071•5h ago
we can blame individual bad decisions, but imo it all stems from the culture of viewing software as a cost center and messing it up from there.
JonChesterfield•7h ago
One issue is that you need ROCm 7, which only just came out.

Another is that people unsportingly write things in cuda.

It'll be a "just works" thing eventually, even if you need software from outside AMD to get it running well.

rbanffy•2h ago
> Another is that people unsportingly write things in cuda.

Whether we like it or not, CUDA is the de-facto standard for these things. I wonder what it would take for a company the size of AMD to dedicate a couple million dollars a year to tracking CUDA as closely as feasible. A couple million dollars is a rounding error for a leading silicon maker.

JonChesterfield•1h ago
Personally I love it, but then I'm also working on a CUDA to amdgpu compiler. I'm probably the only person doing that with a Strix Halo on his desk; I should be debugging CUDA on it shortly.
aaryamanv•7h ago
You can run ROCm and PyTorch natively on Strix Halo on both Windows and Linux. See https://rocm.docs.amd.com/en/docs-7.9.0/index.html
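
A minimal sanity check for a ROCm build of PyTorch; note that ROCm builds expose the GPU through the torch.cuda API, with torch.version.hip set instead of torch.version.cuda:

  import torch

  # On ROCm wheels, the HIP backend is reached via the torch.cuda namespace.
  print("HIP runtime:", torch.version.hip)
  print("GPU visible:", torch.cuda.is_available())

  if torch.cuda.is_available():
      props = torch.cuda.get_device_properties(0)
      print("Device:", props.name, "|", round(props.total_memory / 2**30), "GiB")
      # Tiny smoke test: run a matmul on the GPU.
      x = torch.randn(1024, 1024, device="cuda")
      print("Matmul OK:", (x @ x).shape)
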
dontlaugh•6h ago
It’s a gaming chip.
green7ea•6h ago
I have a Strix Halo based HP ZBook G1A and it's been pretty easy getting local models to run on it. Training small LLMs on it has been a bit harder but doable as well. Mind you, I 'only' have 64 GB with mine.

Under Linux, getting LM Studio to work using the Vulkan backend was trivial. Llama.cpp was a bit more involved. ROCm worked surprisingly well with Arch — I would credit the package maintainers. The only hard part was sorting out Python packaging for PyTorch (use local packages with system's ROCm).

I wouldn't say it's perfect but it's definitely not as bad as it used to be. I think the biggest downside is the difference in environment when you use this as a dev machine and then run the models on NVIDIA hardware for prod.

ctas•2h ago
Can you share a bit more on the small LLMs you've trained? I'm interested in the applicability of current consumer hardware for local training and finetuning.
green7ea•1h ago
I'm not the AI expert in the company but one of my colleagues creates image segmentation models for our specific use case. I've been able to run the PyTorch training code on my computer without any issues. These are smaller models that are destined to run on Jetson boards so they're limited compared to larger LLMs.

edit: just to be clear, I can't train anything competitive with even the smallest LLMs.

drcongo•3h ago
I don't know why you're getting downvotes on this; my experience matches it. I have an Evo-X2, which has Strix Halo, and ROCm still doesn't officially support it. "Support" is supposedly coming in 7.0.2, which can be installed as a preview at the moment, but people are still getting regular and random GPU Hang errors with it. I'm running Arch and I've had to make a bunch of tasks in a mise.toml so that I don't forget the long list of environment variables to override various ROCm settings, and the even longer list of arcane incantations required to update ROCm and PyTorch to versions that actually almost work with each other.
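
For readers hitting the same wall, a hedged sketch of the kind of override being described: HSA_OVERRIDE_GFX_VERSION is a real ROCm environment variable, but the value shown is an assumption and may be unnecessary or wrong for your ROCm release:

  import os

  # Spoof the GPU target for ROCm builds that don't yet list Strix Halo's
  # gfx1151 as supported. The value is illustrative, not a recommendation.
  os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "11.0.0")

  import torch  # import after setting the environment variable

  print(torch.cuda.is_available())
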
epistasis•8h ago
Great article on performance. This video from a few weeks ago goes into chiplet design a bit more too:

https://youtu.be/maH6KZ0YkXU

joelthelion•8h ago
I don't quite get it. What's so special about having 32MB of cache? Why is it called "infinity"?
noelwelsh•8h ago
This article from the same site goes into the Infinity Cache design in a bit more detail: https://chipsandcheese.com/p/amds-cdna-3-compute-architectur...

The summary is that it's a cache attached to the memory controllers, rather than the CPUs, so it doesn't have to worry about cache coherency so much. This could be useful for shared memory parallelism.
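
A first-order way to see why a memory-side cache helps a bandwidth-limited iGPU: every hit is a request DRAM never sees, so when you are DRAM-bound the serviceable request bandwidth scales roughly as dram_bw / (1 - hit_rate). The 256 GB/s figure below is an assumed LPDDR5X number used only for illustration; the article has the measured values:

  # Illustrative only: memory-side cache modeled as a DRAM traffic filter.
  DRAM_BW_GBS = 256.0  # assumed LPDDR5X bandwidth for Strix Halo

  def serviceable_bw(hit_rate):
      # A fraction (1 - hit_rate) of requests reach DRAM, so DRAM bandwidth
      # supports total traffic of DRAM_BW / (1 - hit_rate), up to whatever
      # the cache itself can deliver.
      return DRAM_BW_GBS / (1.0 - hit_rate)

  for hr in (0.0, 0.25, 0.5, 0.75):
      print(f"hit rate {hr:.0%}: ~{serviceable_bw(hr):.0f} GB/s of requests served")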

joelthelion•7h ago
Thank you!
jbreitbart•2h ago
Since it is attached to the memory controller, one could argue that it is truly the final level of the cache hierarchy and that "infinity" is not only a marketing term.
ot•1h ago
But then you could add yet another level of slower (but still faster than RAM) and larger cache. So it comes after all the CPU caches, but is the first of all the memory caches. A more mathematically correct name would be L_omega.
p_l•53m ago
IBM Power8 and Power9 used Centaur chips which had a similar "last level cache" onboard.

Because it is the last level before the actual RAM chips, no coherency is involved.

pixelpoet•7h ago
What makes Intel's SMT implementation "hyper"? What makes Mario "Super"? It's just marketing.
themafia•7h ago
> What makes Mario "Super"?

The Super Mushroom power-up.

ahofmann•6h ago
SCNR: What makes the Mushroom power-up "Super"?
fruitworks•6h ago
It's got what mario craves
askl•2h ago
Electrolytes?
arjvik•7h ago
Hyperthreading is technically a level above superscalar?
dripdry45•2h ago
Apparently, Mario actually was the super for the building that they were in. He was not super, he was the super (superintendent) fixing stuff.
phire•7h ago
AMD named their memory fabric "infinity fabric" for marketing reasons. So when they developed their memory attached cache solution (which lives in the memory fabric, unlike a traditional cache), the obvious marketing name is "infinity cache"

The main advantage of a memory attached cache is that it's cheaper than a regular cache, and can even be put on a separate die, allowing you to have much more of it.

AMD's previous memory fabric from the early 2000s was called "Hyper Transport", which has a confusing overlap with Intel's Hyper-Threading, but I think AMD actually beat Intel to the name by a few years.

typpilol•6h ago
How's latency vs a traditional cache?
Tuna-Fish•6h ago
It has a higher latency than AMD's L2 caches, but similar compared to Nvidia's L2. [0]

I think AMD might be ditching it for the next gen in exchange for growing the L2's, because the lower latency is beneficial to ray tracing.

[0]: https://substackcdn.com/image/fetch/$s_!LHJo!,f_auto,q_auto:...