Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
87•yi_wang•3h ago•25 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
38•RebelPotato•2h ago•8 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
240•valyala•11h ago•46 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
151•surprisetalk•10h ago•150 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
185•mellosouls•13h ago•334 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
68•gnufx•9h ago•55 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
177•AlexeyBrin•16h ago•32 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
161•vinhnx•14h ago•16 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
52•swah•4d ago•95 comments

Total Surface Area Required to Fuel the World with Solar (2009)

https://landartgenerator.org/blagi/archives/127
6•robtherobber•4d ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
128•samasblack•13h ago•76 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
306•jesperordrup•21h ago•95 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
73•momciloo•11h ago•16 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
103•randycupertino•6h ago•220 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
98•thelok•12h ago•22 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
37•mbitsnbites•3d ago•3 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
11•ujjwalreddyks•5d ago•2 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
570•theblazehen•3d ago•206 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
41•chwtutha•1h ago•7 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
292•1vuio0pswjnm7•17h ago•468 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
133•josephcsible•8h ago•160 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
184•valyala•11h ago•165 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
30•languid-photic•4d ago•12 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
227•limoce•4d ago•125 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
899•klaussilveira•1d ago•276 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
4•duxup•47m ago•0 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
146•speckx•4d ago•228 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
113•zdw•3d ago•56 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
145•videotopia•4d ago•48 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
303•isitcontent•1d ago•39 comments

AMD's CDNA 4 Architecture Announcement

https://chipsandcheese.com/p/amds-cdna-4-architecture-announcement
170•rbanffy•7mo ago

Comments

bee_rider•7mo ago
Machine learning is, of course, a massive market and everybody’s focus.

But, does AMD just own the whole HPC stack at this point? (Or would they, if the software was there?).

At least the individual nodes. What’s their equivalent to Infiniband?

phonon•7mo ago
Ultra Ethernet

https://www.tomshardware.com/networking/amd-deploys-its-firs...

https://semianalysis.com/2025/06/11/the-new-ai-networks-ultr...

OneDeuxTriSeiGo•7mo ago
It's also worth noting Ultra Ethernet isn't just an AMD thing. The steering committee for the UEC is made up of basically every hardware manufacturer in the space except Nvidia. And of course Nvidia is a general contributor as well (presumably so they don't get left behind).

https://ultraethernet.org/

jauntywundrkind•7mo ago
Also, Ultra Ethernet went 1.0 (6d ago) and had a decent-sized comment thread: https://news.ycombinator.com/item?id=44249190
wmf•7mo ago
Cray Slingshot is even faster than Infiniband.

Now that Nvidia is removing FP64 I assume AMD will have 100% of the HPC market until Fujitsu Monaka comes out.

curt15•7mo ago
Would traditional HPC applications using FP64 gain anything from CDNA4 compared to the MI300A?
wmf•7mo ago
Probably not. They should wait for MI430.
latchkey•7mo ago
Within the node (GPU to GPU), it is Infinity Fabric.

Externally, it is 8x400G NICs, which is the limitation of PCIe 5.0 anyway.

We had a guy training SOTA models on 9 of our MI300X boxes just fine. Networking wasn't the slow bit.
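A quick back-of-the-envelope check of that pairing (the PCIe 5.0 figures are from the public spec; the 8x400G NIC count is from the comment above):

```python
# Sanity-check: does one 400G NIC roughly saturate a PCIe 5.0 x16 slot?
PCIE5_GTPS = 32          # GT/s per lane (PCIe 5.0)
LANES = 16               # x16 slot per NIC
ENCODING = 128 / 130     # 128b/130b line-code efficiency

slot_gbps = PCIE5_GTPS * LANES * ENCODING   # usable Gb/s per x16 slot
nic_gbps = 400                              # one 400G NIC

print(round(slot_gbps))                 # ~504 Gb/s per slot
print(round(slot_gbps / nic_gbps, 2))   # ~1.26x headroom: 400G sits close to the ceiling
```

So a 400G NIC per x16 slot leaves only about 25% headroom, which is why going faster than 8x400G would need more than PCIe 5.0 can feed.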

robjeiter•7mo ago
When looking at inference is AMD already on par with Nvidia?
moondistance•7mo ago
Yes, for many applications.

Meta, OpenAI, Crusoe, and xAI recently announced large purchases of MI300 chips for inference.

MI400, which will be available next year, also looks to be at least on par with Nvidia's roadmap.

moondistance•7mo ago
(this is also why AMD popped 10% at open yesterday - this is a new development and talks from their 2025 "Advancing AI" event were published late last week + over the weekend)
christkv•7mo ago
Is the software stack still lacking?
moondistance•7mo ago
Yes, big time, but there continues to be lots of progress.

Most importantly, models are maturing, and this means less custom optimization is required.

martinald•7mo ago
Yes, I'd agree with that. There is so much demand for inference, which is maturing rapidly, that even if a lot of the R&D is done on NVidia cards because of their (vastly superior, let's be fair) software stack, if AMD is competitive on the inference side (and, perhaps more importantly, has shorter lead times), then doing inference on AMD is still an enormous market.

I suspect we will be (or already are?) at a point where 95%+ of GPUs are used for inference, not training.

OneDeuxTriSeiGo•7mo ago
Yeah it's still a few years behind but it's getting better. They are hiring software and tooling engineers like crazy. I keep tabs on some of the job slots companies have in our area and every time I check AMD they always have tons of new slots for software, firmware, and tooling (and this has been the case for ~3 years now).

They've been playing catch up after "the bad old days" when they had to let a bunch of people go to avoid going under but it looks like they are catching back up to speed. Now it's just a matter of giving all those new engineers a few years to get their software world in order.

storus•7mo ago
They pay hardware rates to software engineers (a principal engineer at the salary level of a decent fresh graduate), so I wouldn't be too optimistic about them attracting software people who would propel them forward.
OneDeuxTriSeiGo•7mo ago
At least where I live (very much not west coast), their SW and HW rates are at or above what we normally see in this area.
latchkey•7mo ago
Stock is undervalued. If you get in now and it pops over the next few years, it'll likely make up for lower compensation.
MegaButts•7mo ago
You don't need to work at AMD to buy their stock.
latchkey•7mo ago
True, but if you don’t have a job, where’s the money for buying stock coming from?
alemanek•7mo ago
If you are what AMD needs to catch up then you can just go work for NVidia for 3x the pay. This market sucks but top tier engineers in the niche they need are not a dime a dozen.
latchkey•7mo ago
It isn't always about the money.
MegaButts•7mo ago
Then why is your original comment about compensation?
latchkey•7mo ago
What I said was: “it’ll likely make up for lower compensation.”

The point is, someone might join AMD because they believe in the mission, not just for the paycheck. I followed that with: “It isn’t always about the money,” which is consistent with my original comment.

The real subtext is something I care deeply about: Nvidia is a monopoly. If AI is truly a transformative technology, we can't rely on a single company for all the hardware and software. Viable alternatives are essential. I believe in this vision so strongly that I started a company to give developers access to enterprise-grade AMD compute, back when no one was taking AMD seriously in AI. (Cue the HN troll saying that nobody still does.)

If the stock goes up while they’re there, great, that’s a bonus.

MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
[flagged]
latchkey•7mo ago
[flagged]
MegaButts•7mo ago
I won't lie, your projection is pretty funny.
latchkey•7mo ago
¯\_(ツ)_/¯
dang•7mo ago
This looks to be about the point where this exchange turned into a tit-for-tat spat...a really bad one. This is not what HN is for, so please avoid this in the future. I know it isn't always easy to do that, but you both violated the site rules really badly here.

https://news.ycombinator.com/newsguidelines.html

iszomer•7mo ago
We're forbidden from trading our own stock anyway; SEC regulation on insider trading and all.
vlovich123•7mo ago
You’re forbidden from shorting. Buying is completely allowed unless you are classified as an insider, and even then trading windows open for, I believe, a month after quarterly results.
almostgotcaught•7mo ago
You're "talking your book".
zombiwoof•7mo ago
They pay terribly and still have legacy old-guard managers. If you try to innovate on software, you should look elsewhere, or really make sure your manager knows what's what.
martinald•7mo ago
FWIW for the first time in 2+ years I managed to compile llama.cpp with ROCm out of the box and run a model with no problems* on Linux (actually under WSL2 as well), with no weirdness or errors.

Every time I have tried this previously it has failed with some cryptic errors.

So from this very small test it has got way better recently.

*Did have problems enabling the WMMA extensions though. So not perfect yet.
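For reference, the out-of-the-box route described above can be sketched like this, following the llama.cpp build docs for the HIP backend. The gfx target and model path are placeholders, and flag names have changed across llama.cpp versions, so treat this as a sketch rather than a guaranteed recipe:

```shell
# Build llama.cpp against ROCm's HIP backend (per the llama.cpp build docs).
# AMDGPU_TARGETS must match your GPU's gfx ID; gfx1100 is a placeholder
# (`rocminfo | grep gfx` shows yours).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
  cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -- -j

# Smoke test with all layers offloaded to the GPU
# (model.gguf is a placeholder path).
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```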

halJordan•7mo ago
If this has been an issue for two years, then it's not a ROCm or llama.cpp problem.
martinald•7mo ago
Oh, I'm sure you're right that it's operator error, but I'd always have some issue installing ROCm and getting the paths right or something. This is the first time I've managed to install ROCm following the commands exactly and then compile llama.cpp without having to adjust anything.

BTW, this kind of dev experience really does matter. I'm sure it was possible to get it working previously, but I didn't have the level of interest to make it work, even if it was somewhat trivial. Being able to compile out of the box makes a big difference. And AFAIK this new version is the first to properly support WSL2, which means I don't have to dual-boot to even try to get it working. It's a big improvement.

vlovich123•7mo ago
You can blame the user for not using the tools correctly, or the manufacturer for making hard-to-use tools that aren't straightforward or don't work in various non-happy-path conditions (i.e. unreliable installers).

For example, to this day installing MSVC doesn't make a sane default compiler available in a terminal; you have to open their shortcut that sets up environment variables, and you have to just know this is how MSVC works. Is this a user problem, or Microsoft failing to follow the same conventions every other toolchain installer follows?

latchkey•7mo ago
https://eliovp.com/cranking-out-faster-tokens-for-fewer-doll...
jauntywundrkind•7mo ago
Faster small-matrix math, for AI. Yup, that seems like a good fit for what folks want.

Supercharging the Local Data Share (LDS) that's shared by threads is really cool to hear about: 64 KB -> 160 KB. Writes into LDS go from 32 B max to 128 B, increasing throughput. Transposes, to help get the data in the right shape for its next use.

Really curious to see what the unified next-gen UDNA architectures look like, if they really stick to merging CDNA (compute) and RDNA (Radeon) as promised. If consumers end up getting multi-die compute solutions, that would be neat and also intimidatingly hard (lots of energy spent keeping bits in sync across cores for coherency). I've been wondering ever since Navi 4X had its flagship cancelled a while back. I sort of expect this won't scale as nicely as Epyc being a bunch of Ryzen dies. https://wccftech.com/amd-enthusiast-radeon-rx-8000-gpus-alle...
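To put the LDS capacity bump in concrete terms, here is a small sketch (illustrative arithmetic only, not a claim about actual CDNA4 kernels) of the largest square fp32 tile that fits entirely in LDS before and after:

```python
import math

def max_square_fp32_tile(lds_bytes: int) -> int:
    """Side length of the largest square fp32 tile that fits in LDS."""
    floats = lds_bytes // 4      # 4 bytes per fp32 element
    return math.isqrt(floats)

print(max_square_fp32_tile(64 * 1024))   # 128 (64 KB LDS)
print(max_square_fp32_tile(160 * 1024))  # 202 (160 KB LDS)
```

Going from a 128-wide to a ~200-wide tile means fewer trips to HBM per block of work, which is where the extra capacity pays off.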

icf80•7mo ago
no UDNA ? any news ?
incomingpain•7mo ago
I bought a Radeon 9060. ROCm works. I'm getting ~40 tokens/sec out of Phi-4 14B.

BEWARE: I was running fully patched Ubuntu 24.04 LTS and needed to upgrade to Ubuntu 24.10 and then Ubuntu 25 before the drivers worked. Painful.
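That ~40 tokens/sec figure is consistent with a simple bandwidth-bound decode model. The quantization size and memory bandwidth below are assumptions (roughly a Q4-class quant and RX 9060-class GDDR6), not measured values:

```python
# Single-stream decode reads (roughly) every weight once per token,
# so tokens/sec ~= memory bandwidth / model size in bytes.
params = 14e9            # Phi-4: 14B parameters
bytes_per_param = 0.57   # assumed ~4.6 bits/weight at Q4-class quantization
mem_bw = 320e9           # assumed RX 9060-class memory bandwidth, bytes/sec

model_bytes = params * bytes_per_param
print(round(mem_bw / model_bytes, 1))   # ~40.1 tokens/sec
```

Under those assumptions the card is running close to its memory-bandwidth roofline, i.e. there isn't much more single-stream speed to squeeze out of software alone.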