
Google Workspace CLI

https://github.com/googleworkspace/cli
473•gonzalovargas•7h ago•173 comments

You Just Reveived

https://dylan.gr/1772520728
99•djnaraps•3h ago•29 comments

MacBook Neo

https://www.apple.com/newsroom/2026/03/say-hello-to-macbook-neo/
1737•dm•17h ago•2029 comments

The Self-Help Trap: What 20 Years of "Optimizing" Has Taught Me

https://tim.blog/2026/03/04/the-self-help-trap/
13•bonefishgrill•51m ago•1 comments

Relax NG is a schema language for XML

https://relaxng.org/
13•Frotag•1h ago•2 comments

Building a new Flash

https://bill.newgrounds.com/news/post/1607118
505•TechPlasma•11h ago•136 comments

Show HN: Poppy – a simple app to stay intentional with relationships

https://poppy-connection-keeper.netlify.app/
69•mahirhiro•3h ago•17 comments

AMD will bring its "Ryzen AI" processors to standard desktop PCs for first time

https://arstechnica.com/gadgets/2026/03/amd-ryzen-ai-400-cpus-will-bring-upgraded-graphics-to-soc...
39•Bender•2d ago•36 comments

Something is afoot in the land of Qwen

https://simonwillison.net/2026/Mar/4/qwen/
639•simonw•15h ago•284 comments

What Python's asyncio primitives get wrong about shared state

https://www.inngest.com/blog/no-lost-updates-python-asyncio
41•goodoldneon•4h ago•24 comments

Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’

https://techcrunch.com/2026/03/04/anthropic-ceo-dario-amodei-calls-openais-messaging-around-milit...
495•SilverElfin•7h ago•274 comments

NRC issues first commercial reactor construction approval in 10 years [pdf]

https://www.nrc.gov/sites/default/files/cdn/doc-collection-news/2026/26-028.pdf
101•Anon84•9h ago•64 comments

Dulce et Decorum Est (1921)

https://www.poetryfoundation.org/poems/46560/dulce-et-decorum-est
115•bikeshaving•10h ago•66 comments

Humans 40k yrs ago developed a system of conventional signs

https://www.pnas.org/doi/10.1073/pnas.2520385123
108•bikenaga•15h ago•48 comments

Picking Up a Zillion Pieces of Litter

https://www.sixstepstobetterhealth.com/litter.html
96•colinbartlett•3d ago•39 comments

Moss is a pixel canvas where every brush is a tiny program

https://www.moss.town/
239•smusamashah•21h ago•25 comments

Malm Whale

https://www.atlasobscura.com/places/malm-whale
25•thunderbong•4d ago•11 comments

NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute

https://qlabs.sh/slowrun
156•sdpmas•13h ago•27 comments

“It turns out” (2010)

https://jsomers.net/blog/it-turns-out
277•Munksgaard•16h ago•87 comments

Jensen Huang says Nvidia is pulling back from OpenAI and Anthropic

https://techcrunch.com/2026/03/04/jensen-huang-says-nvidia-is-pulling-back-from-openai-and-anthro...
134•jnord•5h ago•51 comments

The L in "LLM" Stands for Lying

https://acko.net/blog/the-l-in-llm-stands-for-lying/
45•LorenDB•3h ago•12 comments

Qwen3.5 Fine-Tuning Guide

https://unsloth.ai/docs/models/qwen3.5/fine-tune
323•bilsbie•19h ago•74 comments

The View from RSS

https://www.carolinecrampton.com/the-view-from-rss/
108•Curiositry•11h ago•29 comments

Chaos and Dystopian news for the dead internet survivors

https://www.fubardaily.com
85•anonnona8878•6h ago•29 comments

Relicensing with AI-Assisted Rewrite

https://tuananh.net/2026/03/05/relicensing-with-ai-assisted-rewrite/
68•tuananh•2h ago•54 comments

Libre Solar – Open Hardware for Renewable Energy

https://libre.solar
250•evolve2k•3d ago•73 comments

Raspberry Pi Pico as AM Radio Transmitter

https://www.pesfandiar.com/blog/2026/02/28/pico-am-radio-transmitter
93•pesfandiar•4d ago•31 comments

Glaze by Raycast

https://www.glazeapp.com/
214•romac•18h ago•130 comments

Roboflow (YC S20) Is Hiring a Security Engineer for AI Infra

https://roboflow.com/careers
1•yeldarb•14h ago

Was Windows 1.0's lack of overlapping windows a legal or a technical matter?

https://retrocomputing.stackexchange.com/questions/32511/was-windows-1-0s-lack-of-overlapping-win...
81•SeenNotHeard•11h ago•56 comments

AMD will bring its "Ryzen AI" processors to standard desktop PCs for first time

https://arstechnica.com/gadgets/2026/03/amd-ryzen-ai-400-cpus-will-bring-upgraded-graphics-to-socket-am5-desktops/
39•Bender•2d ago

Comments

cebert•2d ago
AMD marketing is hoping the “AI” branding is a positive. Anecdotally, I know many consumers who are not sold on AI. This branding could actually hurt sales.
himata4113•1h ago
I'd actually love to have an NPU that isn't useless on my 285k.
aljgz•1h ago
We are dealing with hype, but the reality is that AI will change everything we do. Local models will start being helpful in [more] unobtrusive ways. Machines with decent local NPUs will be usable for longer before they feel too slow.
wood_spirit•1h ago
So I’ve grown a lot warmer to believing that AI can be a better programmer than most programmers these days. That is a low bar :). The current approach to AI can definitely change how effective a programmer is, but then it’s up to the market to decide whether we need so many programmers. The talk about how each company is going to keep all its existing programmers and just expect productivity multipliers is just what execs are currently telling programmers; that might change when the same execs are talking to shareholders etc.

But does this extrapolate to the current way of doing AI entering normal life in a good way that ends up being popular? The way Microsoft etc. is cramming AI into everything kinda says no, it isn’t actually what users want.

I’d like voice control on my PC or phone. That’s a use for these NPUs. But I imagine it’s like AR: what we all want until it arrives, and then it’s meh.

snovv_crash•1h ago
When I interview people for a job I'm not looking to hire an average programmer, though.
vbezhenar•1h ago
For some people maybe. I don't want to use local AI and NPU will be dead weight for me. Can't imagine a single task in my workflow that would benefit from AI.

It's similar to performance/efficiency cores. I don't need power efficiency and I'd actually buy a CPU that doesn't make that distinction.

orbital-decay•1h ago
Also similar to GPU + CPU on the same die, yet here we are. In a sense, AI has already been in every x86 CPU for many years, and you already benefit from using it locally (branch prediction in modern processors is ML-based).
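The "ML-based branch prediction" point refers to perceptron-style predictors, variants of which have shipped in real CPU front ends. A toy sketch of the idea (made-up history length and threshold; real predictors index a table of weight vectors by a hash of the branch address):

```python
# Toy perceptron branch predictor: predict taken/not-taken from a dot
# product of learned weights with recent branch outcomes (+1/-1).

HISTORY_LEN = 8
THRESHOLD = 2 * HISTORY_LEN + 1  # arbitrary training threshold for this sketch

class PerceptronPredictor:
    def __init__(self):
        self.weights = [0] * (HISTORY_LEN + 1)  # index 0 is the bias weight
        self.history = [1] * HISTORY_LEN        # +1 = taken, -1 = not taken

    def predict(self):
        y = self.weights[0] + sum(w * h for w, h in zip(self.weights[1:], self.history))
        return y >= 0, y

    def update(self, taken):
        pred, y = self.predict()
        t = 1 if taken else -1
        # Train on a misprediction, or while confidence is still low.
        if pred != taken or abs(y) <= THRESHOLD:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        self.history = [t] + self.history[:-1]

# A strongly biased branch is learned quickly:
p = PerceptronPredictor()
for _ in range(50):
    p.update(True)
assert p.predict()[0] is True
```

Unlike a saturating two-bit counter, this can also learn correlations with specific older branches, which is what makes it "ML" in a meaningful sense.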
wtallis•43m ago
> Also similar to GPU + CPU on the same die, yet here we are.

I think the overall trend is now moving somewhat away from having the CPU and GPU on one die. Intel's been splitting things up into several chiplets for most of their recent generations of processors, AMD's desktop processors have been putting the iGPU on a different die than the CPU cores for both of the generations that have an iGPU, their high-end mobile part does the same, even NVIDIA has done it that way.

Where we still see monolithic SoCs as a single die is mostly smaller, low-power parts used in devices that wouldn't have the power budget for a discrete GPU. But as this article shows, sometimes those mobile parts get packaged for a desktop socket to fill a hole in the product line without designing an entirely new piece of silicon.

fodkodrasz•1h ago
Never wanted to do high-quality voice recognition? No need for near-instant face/object detection in your photos, or embedding-based indexing and RAG for your local documents, with free-text search where synonyms also work? All locally, in real time, with minimal energy use.

That is fine. Most ordinary users can benefit from these very basic use cases which can be accelerated.

I guess people said the same about video encoding acceleration, and now they use it on a daily basis for video conferencing, for example.
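The embedding-based local search idea above is just cosine similarity over document vectors. A self-contained sketch, where the hash-based `embed()` is only a placeholder for a real local encoder model (the part an NPU would actually accelerate, and the part that makes synonyms work):

```python
# Rank local documents by cosine similarity of embedding vectors.
import hashlib
import math

def embed(text, dim=64):
    # Placeholder embedding: hash word tokens into a dense vector.
    # A real model maps synonyms near each other; this stub does not.
    v = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        v[h % dim] += 1.0
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["tax return 2025 scan", "holiday photos beach", "laptop invoice pdf"]
index = [(d, embed(d)) for d in docs]   # embeddings computed once, offline

def search(query):
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

assert search("tax return") == "tax return 2025 scan"
```

The structure is the same with a real model: embed documents once at index time, embed the query at search time, rank by similarity — all offline and local.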

wtallis•52m ago
> Can't imagine a single task in my workflow that would benefit from AI.

You don't do anything involving realtime image, video, or sound processing? You don't want ML-powered denoising and other enhancements for your webcam, live captions/transcription for video, OCR allowing you to select and copy text out of any image, object and face recognition for your photo library enabling semantic search? I can agree that local LLMs aren't for everybody—especially the kind of models you can fit on a consumer machine that isn't very high-end—but NPUs aren't really meant for LLMs, anyways, and there are still other kinds of ML tasks.

> It's similar to performance/efficiency cores. I don't need power efficiency and I'd actually buy a CPU that doesn't make that distinction.

Do you insist that your CPU cores must be completely homogeneous? AMD, Intel, Qualcomm and Apple are all making at least some processors where the smaller CPU cores aren't optimized for power efficiency so much as maximizing total multi-core throughput with the available die area. It's a pretty straightforward consequence of Amdahl's Law that only a few of your CPU cores need the absolute highest single-thread performance, and if you have the option of replacing the rest with a significantly larger number of smaller cores that individually have most of the performance of the larger cores, you'll come out ahead.
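The Amdahl's-law argument above can be put in numbers. A back-of-envelope comparison with made-up area/performance figures (not any real chip):

```python
# Fixed die-area budget: all big cores vs. a few big cores plus many
# small ones. Toy numbers chosen only to illustrate the trade-off.

AREA_BUDGET = 16.0                # arbitrary area units
BIG = dict(area=4.0, perf=1.0)    # highest single-thread performance
SMALL = dict(area=1.0, perf=0.6)  # 60% of the perf in 25% of the area

def throughput(n_big, n_small):
    return n_big * BIG["perf"] + n_small * SMALL["perf"]

all_big = throughput(4, 0)   # 4 big cores fill the budget: throughput 4.0
hybrid = throughput(2, 8)    # 2 big + 8 small, same area: 2.0 + 4.8 = 6.8

assert 4 * BIG["area"] == AREA_BUDGET
assert 2 * BIG["area"] + 8 * SMALL["area"] == AREA_BUDGET
assert hybrid > all_big  # more total throughput, same single-thread peak
```

The serial parts of a workload still get the big cores' peak single-thread speed, while the parallel parts see far more aggregate throughput — which is the Amdahl's-law point.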

throwa356262•43m ago
Is everyone a content creator these days?

Besides, most of what you mentioned doesn't run on NPU anyway. They are usually standard GPU workload.

wtallis•39m ago
None of what I listed was in any way specific to "content creators". They're not the only ones who participate in video calls or take photos.

And on the platforms that have a NPU with a usable programming model and good vendor support, the NPU absolutely does get used for those tasks. More fragmented platforms like Windows PCs are least likely to make good use of their NPUs, but it's still common to see laptop OEMs shipping the right software components to get some of those tasks running on the NPU. (And Microsoft does still seem to want to promote that; their AI PC branding efforts aren't pure marketing BS.)

anematode•24m ago
The issue is that the consumer strongly associates "AI" with LLMs specifically. The fact that machine learning is used to blur your background in a video call, for example, is irrelevant to the consumer and isn't thought of as AI.
skirmish•1h ago
Indeed, I was buying a laptop for my wife, and she was viscerally against "Ryzen AI": "I don't want a CPU with built-in AI spying on my screen all the time!"
kijin•1h ago
They can just buy a regular Ryzen 9000 series CPU, then. Maybe add a real graphics card if they're into gaming.
iso-logi•1h ago
8 cores/16 threads boosting up to 5.1 GHz with an iGPU would be pretty neat for a Plex server or a Proxmox server with a few VMs.
zeroflow•3m ago
As far as I can find, Plex does not support AMD iGPU for transcoding. Jellyfin will work, but support seems rather spotty. For other AI/ML work, it seems like ROCm is up and coming, but support - e.g. for Frigate object detection - is still a work in progress, especially for newer chips.
Mashimo•2m ago
Maybe also Immich, for face and object recognition.
Buttons840•1h ago
Do we expect special AI processors to diverge from GPUs? Like, processors that can do parallel neural network computations but cannot draw graphics?
dagmx•1h ago
That’s already the norm no?

Pretty much every hardware vendor has an NPU

c0balt•1h ago
That is already the case with datacenter "GPUs". An A100, MI300, or Intel PVC/Gaudi has no useful graphics performance or capabilities. Coprocessors à la NPU/VPU are also on the rise again for CPUs.
elcritch•49m ago
Great, now I’m envisioning a rich guy using an A100 as his desktop GPU just to show off. Which raises the question of whether that’s even possible.
userbinator•8m ago
It has no video output.
jiggawatts•1h ago
Yes.

Even the latest NVIDIA Blackwell GPUs are general purpose, albeit with negligible "graphics" capabilities. They can run fairly arbitrary C/C++ code with only some limitations, and the area of the chip dedicated to matrix products (the "tensor units") is relatively small: less than 20% of the area!

Conversely, the Google TPUs dedicate a large area of each chip to pure tensor ops, hence the name.

This is partly why Google's Gemini is 4x cheaper than OpenAI's GPT5 models to serve.

Jensen Huang has said in recent interviews that he stands by the decision to keep the NVIDIA GPUs more general purpose, because this makes them flexible and able to be adapted to future AI designs, not just the current architectures.

That may or may not pan out.

I strongly suspect that the winning chip architecture will have about 80% of its area dedicated to tensor units, very little onboard cache, and model weights streamed in from High Bandwidth Flash (HBF). This would be dramatically lower power and cost compared to the current hardware that's typically used.

Something to consider is that as the size of the matrices in a model scales up, the compute needed for matrix multiplication goes up as the cube of their size, but the other miscellaneous operations such as softmax, ReLU, etc. scale up linearly with the size of the vectors being multiplied.

Hence, as models scale into the trillions of parameters, the matrix multiplications ("tensor" ops) dominate everything else.
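The cube-vs-linear scaling claim is easy to check with a quick FLOP count (square n×n matrices assumed for simplicity):

```python
# Multiplying two n-by-n matrices costs ~2*n**3 FLOPs; elementwise ops
# like ReLU over the n-by-n result cost ~n**2. The matmul share of total
# compute therefore grows without bound as n does.

def matmul_flops(n):
    return 2 * n**3   # n**2 outputs, each an n-term multiply-accumulate

def relu_ops(n):
    return n**2       # one comparison per output element

for n in (256, 4096, 65536):
    ratio = matmul_flops(n) / relu_ops(n)   # = 2*n
    print(f"n={n:>6}: matmul/elementwise ratio = {ratio:,.0f}")
```

So at n=4096 the matmul already outweighs the elementwise work by ~8000x, which is why dedicating most of the die to tensor units pays off at scale.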

tuukkah•1h ago
Meanwhile, the corresponding "non-standard" desktop PC is the Framework Desktop, which with the Ryzen AI Max+ 395 can use 120GB of its 128GB RAM for the GPU: How to Run a One Trillion-Parameter LLM Locally: An AMD Ryzen™ AI Max+ Cluster Guide https://www.amd.com/en/developer/resources/technical-article...
hedgehog•21m ago
Minisforum MS-S1 is the same chip but has a PCIe slot suitable for a network card.
snovv_crash•1h ago
How much dedicated cache do these NPUs have? Because it's easy enough to saturate the memory bandwidth using the CPU for compute, never mind the GPU. Adding dark silicon for some special operations isn't going to make our memory bandwidth any faster.
FpUser•1h ago
Well, for me personally it's a meh until RAM prices go down. Suddenly, a decent PC has turned from a tool accessible to the average Joe into a luxury item.
bitwize•1h ago
Narrator: The RAM prices did not, in fact, go down.
mcraiha•1h ago
These are mobile chips shoehorned into AM5. They aren't very good for gaming, for example. https://videocardz.com/newz/amd-ryzen-ai-400-does-not-suppor...
bcraven•38m ago
Presumably that's why the subheading is:

>First wave of Ryzen AI desktop CPUs targets business PCs rather than DIYers.

poly2it•57m ago
The Ryzen AI line is actually great if deployed to an entire org as the bottom tier, as it guarantees every device has a 50 TOPS NPU. We deploy local software at $STARTUP and this makes deployment to a Windows corp more predictable.
lelanthran•35m ago
It doesn't sound as impressive as I wanted :-(

I wanted a better Strix Halo (which has 128GB unified RAM and 40 CUs on the 8080s (or something) iGPU).

This looks like normal Ryzen mobile chips, but with fewer CUs.

wtallis•16m ago
Putting Strix Halo into the AM5 socket would make no sense. Half the memory controllers would be orphaned and the GPU would be severely bandwidth-starved (assuming that the memory controller on Strix Halo actually supports DDR5 and not just LPDDR5).
noelwelsh•9m ago
Yeah the next generation of Strix Halo is what would get me excited. I think right now TSMC has no capacity, so maybe we have to wait another year. Kinda ironic that all CPU/RAM capacity is being sold to LLM companies, and as a result we can't get the hardware needed for good local LLMs.
a012•1m ago
> This makes them AMD’s first desktop chips to qualify for Microsoft’s Copilot+ PC label, which enables a handful of unique Windows 11 features like Recall and Click to Do.

Microsoft: "Friendship ended with Intel, now AMD is my best friend"