
Rethinking How We Optimize Images for Small and Mid-Sized Websites

https://err0r500.substack.com/p/rethinking-how-we-optimize-images
2•err0r500•3m ago•0 comments

VST 3.8.0 SDK Released (VST3 is now open source, released under the MIT license)

https://forums.steinberg.net/t/vst-3-8-0-sdk-released/1011988
1•crispinh•6m ago•0 comments

Why Not Valetudo?

https://valetudo.cloud/pages/general/why-not-valetudo.html
2•FlynnLivesMattr•9m ago•0 comments

Claude Skills might be a Gamechanger

https://saturnyx.dino.icu/programming-news/2025/10/22/claude-skills-might-be-a-gamechanger.html
2•saturnyx•9m ago•0 comments

Shutdown Enters Day 21 After Latest Funding Vote Fails

https://www.barrons.com/livecoverage/government-shutdown-news-today-102125
1•zerosizedweasle•13m ago•0 comments

Big Companies vs. Startups

https://danluu.com/startup-tradeoffs/
1•aamederen•15m ago•0 comments

Find clarity and calm in modern life with Wiser Life's simple Seneca insights

https://havewiserlife.com/
1•maciekdebiec•15m ago•0 comments

Chemical pollution a threat comparable to climate change, scientists warn

https://www.theguardian.com/environment/2025/aug/06/chemical-pollution-threat-comparable-climate-...
1•Michelangelo11•15m ago•0 comments

Move, Destruct, Forget, and Rust

https://smallcultfollowing.com/babysteps/blog/2025/10/21/move-destruct-leak/
3•Bogdanp•17m ago•0 comments

Trump Demurs, China Hardens

https://www.politico.com/newsletters/weekly-trade/2025/10/20/trump-demurs-china-hardens-00615091
3•zerosizedweasle•17m ago•0 comments

I built an AI that replaces spreadsheets. It's called WealthAI

https://wealth-ai.in/
1•asaws•18m ago•1 comments

After the Turing Test

https://blog.judicata.com/after-the-turing-test-651f3eb1cf54
1•igurari•20m ago•0 comments

Shipping products fast should be the #1 tech leaders' priority

2•meir-avimelec•21m ago•0 comments

Update regarding Vercel service disruption on October 20, 2025

https://vercel.com/blog/update-regarding-vercel-service-disruption-on-october-20-2025
3•MaxLeiter•23m ago•0 comments

Advent of Code 2025: there will be 12 days of puzzles

https://adventofcode.com/
2•praseodym•24m ago•1 comments

I used to like software development but not anymore

https://blog.kulman.sk/i-used-to-like-software-development-but-not-anymore/
2•ingve•25m ago•0 comments

Smart beds began roasting their owners during AWS outage

https://www.pcworld.com/article/2948826/these-smart-beds-began-roasting-their-owners-during-aws-o...
2•branko_d•27m ago•1 comments

Denoising Images of Cats and Dogs with Autoencoders

https://mayberay.bearblog.dev/denosing-images-of-cats-and-dogs-with-autoencoders/
2•mugamuga•27m ago•1 comments

Vertiginous Accounts: Travels in the Air (1871 edition)

https://publicdomainreview.org/collection/travels-in-the-air/
3•prismatic•29m ago•0 comments

Show HN: I built a Bridge wrapper for digital nomads to get paid in stablecoins

https://www.useairsend.com/
2•HenryYWF•29m ago•0 comments

Enchanting Imposters

https://daily.jstor.org/enchanting-imposters/
3•Petiver•30m ago•0 comments

Cyberselfish (1996) [pdf]

https://www.paulinaborsook.com/PDF-disk-1/Cyberselfish_Mother%20Jones.pdf
1•camillomiller•30m ago•0 comments

The Great Butterfly Heist

https://www.theguardian.com/global/2025/oct/04/great-butterfly-heist-how-collector-stole-thousand...
2•lermontov•30m ago•0 comments

Galaxy XR: Opening New Worlds

https://news.samsung.com/global/introducing-galaxy-xr-opening-new-worlds
1•mgh2•30m ago•1 comments

Kotlin Brain Teasers

https://pragprog.com/titles/kotlinbt/kotlin-brain-teasers/
1•dong13•31m ago•0 comments

Axe – CLI tool for interacting with iOS Simulators

https://github.com/cameroncooke/AXe
1•epaga•33m ago•0 comments

Google porting all internal workloads to Arm, with AI helper

https://www.theregister.com/2025/10/22/google_multi_arch_x86_arm_port/
1•beardyw•33m ago•0 comments

Even Xbox developer kits are getting a big price hike

https://www.theverge.com/report/803237/microsoft-xbox-devkit-price-hikes-developers
2•croes•34m ago•0 comments

Death to the "Sprint Review"

https://squirrelsquadron.substack.com/p/death-to-the-sprint-review-event
2•squirrel•35m ago•0 comments

Show HN: I'm 15 and built Gelt – an agentic AI that builds full-stack apps

https://gelt.dev
2•etaigabbai•37m ago•1 comments

Evaluating the Infinity Cache in AMD Strix Halo

https://chipsandcheese.com/p/evaluating-the-infinity-cache-in
45•zdw•2h ago

Comments

andrewstuart•1h ago
Despite this APU being deeply interesting to people who want to do local AI, anecdotally I hear that it’s hard to get models to run on it.

Why would AMD not have focused everything it possibly can on demonstrating, documenting, fixing, and smoothing the path for AI on its systems?

Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

AMD should do whatever it takes to avoid these sorts of situations:

https://youtu.be/cF4fx4T3Voc?si=wVmYmWVIya4DQ8Ut

typpilol•1h ago
Any idea what makes models hard to run on it?

Just general compatibility between Nvidia and AMD for stuff that was built for Nvidia originally?

Or do you mean something else?

cakealert•53m ago
It's not the models, it's the tooling. Models are just weights and an architecture spec. The tooling is how to load and execute the model on hardware.

Some UX-oriented tooling has sort of solved this problem and will run on AMD: LM Studio
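
As a concrete illustration of that split, here is a minimal PyTorch sketch (the model class and file name are made up): the checkpoint holds nothing but named tensors, and everything about where and how they execute lives in the surrounding tooling.

  # "Architecture spec" as code, "weights" as plain tensors; the tooling
  # decides how to load them and which backend executes them.
  import torch
  import torch.nn as nn

  class TinyMLP(nn.Module):  # hypothetical stand-in for a real architecture
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

      def forward(self, x):
          return self.net(x)

  model = TinyMLP()
  torch.save(model.state_dict(), "weights.pt")  # checkpoint: tensors only, no execution logic

  model.load_state_dict(torch.load("weights.pt"))
  device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm builds also answer to "cuda"
  model.to(device)
  print(model(torch.randn(1, 16, device=device)))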

pella•1h ago
"The AMD Ryzen™ AI Max+ processor is the first (and only) Windows AI PC processor capable of running large language models up to 235 Billion parameters in size. This includes support for popular models such as: Open AI's GPT-OSS 120B and Z.ai Org's GLM 4.5 Air. The large unified memory pool also allows models (up to 128 Billion parameters) to run at their maximum context length (which is a memory intensive feature) - enabling and empowering use cases involving tool-calling, MCP and agentic workflows - all available today. "

  GPT-OSS 120B MXFP4              : up to 44 tk/s
  GPT-OSS 20B MXFP4               : up to 62 tk/s
  Qwen3 235B A22B Thinking Q3 K L : up to 14 tk/s
  Qwen3 Coder 30B A3B Q4 K M      : up to 66 tk/s
  GLM 4.5 Air Q4 K M              : up to 16 tk/s
(performance in tk/s): https://www.amd.com/en/blogs/2025/amd-ryzen-ai-max-personal-...
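
For scale, a rough back-of-the-envelope on why the large unified memory pool is the enabling feature here (illustrative arithmetic only; it ignores MXFP4's block-scaling overhead and runtime buffers):

  params = 120e9                # GPT-OSS 120B
  bytes_per_param = 4 / 8       # MXFP4: roughly 4 bits per weight
  weights_gb = params * bytes_per_param / 1e9
  print(f"~{weights_gb:.0f} GB of weights")  # ~60 GB before any KV cache
  # The KV cache grows with context length on top of this, which is why
  # running models near their maximum context is memory intensive and
  # wants a large unified pool rather than typical discrete-GPU VRAM.
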
andrewstuart•1h ago
I’m not sure why you are telling me this.

YuukiRey•53m ago
It’s an example of AMD catering to the AI crowd to somewhat refute your claim that they are clueless.

Not exactly a gigantic mental leap.

spockz•42m ago
I think it actually reinforces the point. They know how to cater to the AI crowd in terms of hardware but still drop the ball at the software level.

lmm•1h ago
Hardware companies are extremely bad at valuing software. The mystery isn't that AMD is bad at it; the mystery is that Nvidia is good at it. Nvidia also has a head start of probably 30-40 years. AMD is trying as hard as it can, but changing culture takes time.

DeepYogurt•1h ago
Intel and Arm are also pretty good at it. AMD feels like the outlier here.

sidkshatriya•1h ago
> Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

I have some theories. Firstly, Nvidia was smart enough to use a unified compute GPU architecture across all its product lines -- consumer and commercial. AMD has this awkward split between CDNA and RDNA. So while AMD is scrambling to get CDNA competitive, RDNA is not getting as much attention as it should. I'm pretty sure its ROCm stack has all kinds of hacks trying to get things working across consumer Radeon devices (which internally are probably not well suited/tuned for compute anyway). AMD is hamstrung by its consumer hardware for now in the AI space.

Secondly, AMD is trying to be "compatible" with Nvidia (via HIP). Sadly, this is the same thing AMD did with Intel in the past. Being compatible is really a bad idea when the market leader (Nvidia) is not interested in standardising and actively pursues optimisations and extensions. AMD will always be playing catch-up.

TL;DR: AMD made some bad bets on what future hardware would look like and, unlike Nvidia, never thought software was critical.

AMD now realizes that software is critical and what future hardware should look like. However, it is difficult to catch up with Nvidia, the most valuable company in the world, with almost limitless resources to invest in further improving its hardware and software. Even as AMD improves, it will continue to look bad in comparison to Nvidia as the state of the art keeps getting pushed forward.

positron26•1h ago
Nvidia's strategic foresight explains why Nvidia is ahead, but it doesn't quite capture the real issue: the challenge is not something that AMD can, or should, tackle alone.

The 7,484+ companies that stand to benefit have no good way to split the bill and dogpile a problem that is nearly impossible to make progress on without lots of partners adding their perspectives via a breadth of use cases. This is why I'm building https://prizeforge.com.

Nvidia didn't do it alone. Industry should not expect or wait for AMD to do it alone; waiting just means lighting money on fire right now. In return for support, industry can demand that more open technology be used across AMD's stack, making overall competition better in return for making AMD competitive.

JonChesterfield•9m ago
One issue is that you need ROCm 7, which only just came out.

Another is that people unsportingly write things in CUDA.

It'll be a "just works" thing eventually, even if you need software from outside AMD to get it running well.

aaryamanv•5m ago
You can run ROCm and PyTorch natively on Strix Halo on both Windows and Linux. See https://rocm.docs.amd.com/en/docs-7.9.0/index.html
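
A quick way to sanity-check such a setup from Python (a sketch; the exact device string depends on the build and GPU). It also shows where the HIP compatibility layer mentioned above surfaces: ROCm builds of PyTorch reuse the CUDA-named device API.

  import torch

  print(torch.__version__)
  print(torch.version.hip)           # version string on ROCm builds, None on CUDA builds
  print(torch.cuda.is_available())   # True once the ROCm/HIP backend sees the GPU
  if torch.cuda.is_available():
      print(torch.cuda.get_device_name(0))
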
epistasis•1h ago
Great article on performance. This video from a few weeks ago goes into chiplet design a bit more too:

https://youtu.be/maH6KZ0YkXU

joelthelion•56m ago
I don't quite get it. What's so special about having 32MB of cache? Why is it called "infinity"?

noelwelsh•49m ago
This article from the same site goes into the Infinity Cache design in a bit more detail: https://chipsandcheese.com/p/amds-cdna-3-compute-architectur...

The summary is that it's a cache attached to the memory controllers, rather than the CPUs, so it doesn't have to worry about cache coherency so much. This could be useful for shared memory parallelism.
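
The benefit follows the usual average-memory-access-time arithmetic; the numbers below are invented for illustration, not measurements from the article:

  # effective latency = hit_rate * cache_latency + (1 - hit_rate) * DRAM latency
  def amat(hit_rate, cache_ns, dram_ns):
      return hit_rate * cache_ns + (1 - hit_rate) * dram_ns

  print(amat(0.0, 40, 130))   # no cache: every access pays the full DRAM trip
  print(amat(0.5, 40, 130))   # 50% hit rate: 85 ns average
  print(amat(0.8, 40, 130))   # 80% hit rate: 58 ns average

The same logic applies to bandwidth: hits are served at the cache's higher bandwidth instead of consuming DRAM bus cycles.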

joelthelion•47m ago
Thank you!

pixelpoet•48m ago
What makes Intel's SMT implementation "hyper"? What makes Mario "Super"? It's just marketing.

themafia•17m ago
> What makes Mario "Super"?

The Super Mushroom power-up.

arjvik•3m ago
Hyperthreading is technically a level above superscalar?

phire•20m ago
AMD named their memory fabric "Infinity Fabric" for marketing reasons. So when they developed their memory-attached cache solution (which lives in the memory fabric, unlike a traditional cache), the obvious marketing name was "Infinity Cache".

The main advantage of a memory-attached cache is that it's cheaper than a regular cache, and it can even be put on a separate die, allowing you to have much more of it.

AMDs previous memory fabric from the early 2000s was called "Hyper Transport", which has a confusing overlap with Intel's Hyper Threading, but I think AMD actually bet intel to the name by a few years.