
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
521•klaussilveira•9h ago•146 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
855•xnx•14h ago•515 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
68•matheusalmeida•1d ago•13 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
176•isitcontent•9h ago•21 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
177•dmpetrov•9h ago•78 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
288•vecti•11h ago•130 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
67•quibono•4d ago•11 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
342•aktau•15h ago•167 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
336•ostacke•15h ago•90 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
236•eljojo•12h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
431•todsacerdoti•17h ago•224 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
6•videotopia•3d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
40•kmm•4d ago•3 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
369•lstoll•15h ago•252 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
12•romes•4d ago•1 comment

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
218•i5heu•12h ago•162 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
87•SerCe•5h ago•74 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
17•gmays•4h ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
38•gfortaine•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•81 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
60•phreda4•8h ago•11 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
126•vmatsiiako•14h ago•51 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
261•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1027•cdrnsf•18h ago•428 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
54•rescrv•17h ago•18 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
16•denysonique•5h ago•2 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
106•ray__•6h ago•51 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
44•lebovic•1d ago•14 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
83•antves•1d ago•60 comments

AMD's Chiplet APU: An Overview of Strix Halo

https://chipsandcheese.com/p/amds-chiplet-apu-an-overview-of-strix
207•zdw•3mo ago

Comments

oDot•3mo ago
I read somewhere, but can't remember where, that a major reason those APUs aren't as efficient as the Apple ones is a conscious decision to share the architecture with Epyc and therefore accept worse efficiency at lower wattage as a tradeoff.

Can someone confirm/refute that?

christkv•3mo ago
They are OK, but yeah, they don't have anything like the memory bandwidth of an M3 Ultra. But they also cost a lot less. I'm primarily looking to replace my older desktop, but I just have to make sure I can run an external GPU like the A6000 that I can borrow from work without having to spend a week fiddling with settings or parameters.
jeswin•3mo ago
In this review, Hardware Canucks tested [1] the M4 Pro (2nd-gen 3nm) and the 395+ (4nm) at 50W and found the performance somewhat comparable. The differences can be explained away by 3nm vs 4nm.

[1]: https://www.youtube.com/watch?v=v7HUud7IvAo

aurareturn•3mo ago
It isn't comparable at all. In MT, maybe it's comparable, with the M4 Pro still winning. In ST, the M4 Pro is 3-4x ahead of Strix Halo in efficiency.
kllrnohj•3mo ago
> In ST, it is 3-4x ahead of Strix Halo in efficiency.

The Hardware Canucks video didn't seem to do any such investigation, where did you get that number from?

AnthonyMouse•3mo ago
It seems like a comparison of the battery life under light loads (accounting for the vast majority of the difference) multiplied by some unspecified single-thread performance benchmark? But under light loads laptop battery life is dominated by things like the screen rather than the CPU, and on top of that the MacBook has a larger battery.

Meanwhile, under the heavy loads that actually tax the processor, the M4 somehow has worse battery life even with the larger battery and a nominally lower TDP.

Is the famed efficiency not the processor at all, and are they just winning on the basis of choosing more efficient displays and wireless chips?

jamiek88•3mo ago
How the heck is an M4 Pro with 3.6x faster single-thread performance 'comparable'? Which, by the way, you can buy in a $600 prebuilt, not $2500, if you can even find this unobtanium chip.
kllrnohj•3mo ago
Where are you seeing 3.6x faster single-thread performance???
tredre3•3mo ago
Geekbench 6 scores vary based on cooling, but the 395+ hangs out around 3000 and the M4 Pro around 3600. How is that 3.6x?
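
As a sanity check, the ratio those (approximate, cooling-dependent) scores imply is easy to compute; a quick Python sketch using the thread's own numbers:

    # Geekbench 6 single-core scores quoted above (approximate)
    m4_pro, ryzen_395 = 3600, 3000
    print(f"{m4_pro / ryzen_395:.2f}x")  # ~1.2x single-thread, nowhere near 3.6x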
canucker2016•3mo ago
Where can you buy a $600 mac with an M4 Pro?

The M4 mac mini at $599 comes with 16GB RAM and 256GB SSD - see https://www.apple.com/shop/buy-mac/mac-mini/apple-m4-chip-wi...

The M4 Pro mac mini starts at $1399 with 24GB RAM and 512GB SSD - see https://www.apple.com/shop/buy-mac/mac-mini/apple-m4-pro-chi...

christkv•3mo ago
I love the concept of it and have been thinking about getting one. The only problem I see right now is that, as far as I can tell, there's no way to get an external dock to run an additional external GPU in the future.
izacus•3mo ago
I'm not sure what you mean? I'm running an eGPU with my Strix Point laptop via Thunderbolt.

I've also seen quite a few mini PCs with OcuLink ports and Strix Halo CPUs.

christkv•3mo ago
I was reading about people having problems getting external cards working if they had a lot of memory?
Maxious•3mo ago
> In a Linux context I got some GPUs working and I can add some [external] GPU devices. Minisforum, when I reached out to them, said they don't officially support either, via the Thunderbolt-compatible USB4, USB4 v2, or even the built-in PCIe slot. Yeah, not technically officially supported, and it's because of the resource allocation and the BAR space, and they need somebody on the BIOS team to understand that to fix it.

https://www.youtube.com/watch?v=TvNYpyA1ZGk

There are ways to manage BAR space better in Linux, or with UEFI preboot environments for Windows, as hobbyists have been doing for ages due to bad BIOS support: https://github.com/xCuri0/ReBarUEFI
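
For anyone wanting to poke at this, BAR sizes are exposed through Linux sysfs; a minimal Python sketch (standard sysfs layout, but treat it as illustrative rather than a supported tool):

    # List BAR sizes for display-class PCI devices from sysfs (Linux).
    # Each line of the `resource` file is "start end flags" in hex;
    # an all-zero line means that BAR is unused.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        if (dev / "class").read_text().startswith("0x03"):  # display controllers
            print(dev.name)
            for i, line in enumerate((dev / "resource").read_text().splitlines()[:6]):
                start, end, _flags = (int(x, 16) for x in line.split())
                if end:
                    print(f"  BAR{i}: {(end - start + 1) / 2**20:,.0f} MiB")

If the GPU's largest BAR shows up as only 256 MiB, resizable BAR usually isn't active, which is the situation the ReBarUEFI project linked above patches at the firmware level.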

InTheArena•3mo ago
Thanks for the pointer. I have been struggling to get either an OcuLink or USB4 PCIe tunnel to work with the Framework Desktop. Hopefully there are some clues here.
rcarmo•3mo ago
There are plenty of mini-PCs with USB4 and Oculink, and you can get an M.2 adapter (might be tricky to retrofit into a laptop though).
TulliusCicero•3mo ago
So potentially competitive with a 5070M for graphics? Sounds very nice, as long as price and power draw are reasonable.
makeitdouble•3mo ago
Power draw is around 75W. It can be manually boosted, but will stay below 100W under all circumstances (from memory, as I was researching the Z13).

The chip itself should accept higher power draws, and ASUS usually isn't shy about feeding 130+W to a laptop, so the 75W figure was quite a surprise to me.

ivape•3mo ago
I was just thinking the other day that AMD can match Nvidia pound for pound on the raw hardware specs, and if they don’t just yet, they get pretty close. If AI is a bubble, then AMD should not catch up. If there isn’t a bubble, then there is no choice but to learn to use whatever is out there and AMD is truly set to be another trillion dollar company. The 10% stake OpenAI took is going to look like a Google buying YouTube moment in the long run.

And it's worth noting, AMD has always matched up with Nvidia hardware-wise for decades, plus or minus. They are an interesting company in that they took on both Nvidia and Intel, and they are still doing so.

chao-•3mo ago
Comparing this against mobile dGPUs and the (finally real) DGX Spark, this feels like a latent market segment that has not arrived at its final form. I don't know what delayed the DGX Spark so long, but it granted AMD a huge boon by allowing them to capture some market mindshare first.

Compared to discrete GPUs (mobile or not), the advantage of a dGPU is memory bandwidth. The disadvantage of a dGPU is power draw and memory capacity—if we set aside CUDA, which I grant is a HUGE thing to just "set aside".

If we mix in the small DGX Spark desktops, then those have an additional advantage in the dual 200Gb network ports that allow for RDMA across multiple boxes. One could get more out of a small stack (2, 3, or 4) of those than from the same number of Strix Halo 395 boxes. However, as sexy as my homelab-brain finds a small stack of DGX Spark boxes with RDMA, I would think that for professional use I would rather have a GPU server (or Threadripper GPU workstation) than four DGX Spark boxes?

Because the DGX Spark isn't being sold in a laptop (AFAIK, CMIIW), that is another differentiator in favor of the Strix Halo. Once again, it points to this being a weird, emerging market segment, and I expect the next generation or two will iterate towards how these capabilities really ought to be packaged.

wffurr•3mo ago
“dGPU” usually means “discrete GPU”. Do you mean “iGPU” for “integrated GPU” instead?

Strix Halo is also being marketed for gaming but the performance profile is all wrong for that. The CPU is too fast and the iGPU still not strong enough.

I am sure it’s amazing at matmul though.

chao-•3mo ago
Yes, I intended to use the term "discrete GPU" before using "dGPU" as a shorthand for that exact reason (in the second paragraph). I now see that I edited the first paragraph to use "dGPU" without first defining it as such.

I also agree that they aren't for gaming (something I know little about). My comment was with respect to compute workloads, but I never specified that. Apologies.

speed_spread•3mo ago
As a casual gamer I'm already OK with the RTX 3050 dGPU on my laptop. Reports put Strix Halo at RTX 4070 level, which is massive for an iGPU and certainly allows for 2K single-screen gaming. Hardcore gaming will always require a desktop with PCIe boards.
lostmsu•3mo ago
Strix Halo is nowhere near RTX 4070 (desktop at least, not familiar with laptop GPUs).
speed_spread•3mo ago
Maybe there's been some selective optimization and careful marketing, but even being in that ballpark for some games now means that more is coming.

https://www.techspot.com/news/106835-amd-ryzen-strix-halo-la...

lostmsu•3mo ago
This link is a terrible source. In one of the graphs the 4060 is faster than the 4070. This speaks to the quality of the testing.
kimixa•3mo ago
In some power-constrained scenarios that sort of thing is often pretty reproducible.

Especially if the different SKUs have different power budgets. Laptop GPU naming and performance is a bit of a mess, as in the example shown (the 4060 in the Asus TUF Gaming A16 has a limit of 140W GPU+CPU, while the 4070 in the Asus ProArt PX13 has 115W GPU+CPU, and even that is a "custom" non-default mode with 95W being the actual out-of-the-box limit).

With wildly varying power profiles, laptop graphics need to be compared by chassis (and the cooling/power supply that implies) as much as by GPU SKU.

lostmsu•3mo ago
That just proves the point about the source, right?
yunohn•3mo ago
No, it disproves the misconception that GPUs with the same model number will perform the same on all devices.
AmVess•3mo ago
I have one: a Framework Desktop mainboard that I put into a larger ITX chassis with a regular power supply.

It's fine for 1440p gaming. I don't use it for that, but it would not be a bother if that was all I had.

wffurr•3mo ago
It's fine for 1440p gaming, but you way overpaid for the GPU versus a discrete GPU plus a lower-end CPU with socketed RAM.

The CPU power and the high-bandwidth integrated RAM aren't the right performance trade-offs for a gaming workload. Does it work for it? Sure. But you also have a bunch of extra hardware you don't really need for it.

Tepix•3mo ago
I bought it for AI; the fact that I can also use it for gaming is a nice bonus (same with the RTX 3090 previously).
dismalaf•3mo ago
From what I've seen, the gaming benchmarks are fantastic: it beats the mobile 5070 for some games and settings, and is slightly behind on others, while being very far ahead of every other iGPU.

I have a laptop with an Nvidia GPU. It ruins battery life and makes the machine run very hot. I'd pay a lot for a powerful iGPU.

ekianjo•3mo ago
I have a Framework Desktop and it is a fine machine for gaming as well. It won't beat discrete GPUs, but you can run Cyberpunk 2077 at max settings at 1080p and still be above 60fps.

Edit: it does feel like an RTX 4060 performance-wise, so it's not far from some discrete GPUs.

justincormack•3mo ago
FYI, it's not dual 200Gb; it's 1x 200Gb or 2x 100Gb.
justinclift•3mo ago
How sure are you of that? :)

Everything I've seen says it's 2x 200GbE.

One of many examples: https://www.storagereview.com/review/nvidia-dgx-spark-review...

wmf•3mo ago
That review says "Allows for a maximum of 200G bandwidth" between the two ports.
justinclift•3mo ago
It literally says this:

> ConnectX-7 Smart NIC – 2x 200G QSFP

and:

> what makes this unit interesting is the dual 200 GbE QSFP56 interfaces driven by an integrated NVIDIA ConnectX-7 SmartNIC.

---

Let's try a manufacturer's page then for confirmation:

https://www.dell.com/en-us/shop/desktop-computers/dell-pro-m...

In the parts labelling diagram, it has this:

> ConnectX-7 Smart NIC (2x 200G QSFP ...

---

That being said, the Storage Review piece does point out PCIe bandwidth being a limiter anyway:

> At first glance, you might deduce that the Spark allows for 400G of connectivity; unfortunately, due to PCIe limitations, the Spark is only able to provide 200G of connectivity.
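
Back-of-the-envelope numbers make that plausible. Assuming the ConnectX-7 hangs off a PCIe 5.0 x8 host link (an assumption for illustration, not a confirmed spec):

    # PCIe 5.0 runs 32 GT/s per lane with 128b/130b encoding
    lane_gbs = 32 * (128 / 130) / 8   # ~3.94 GB/s per lane
    pcie5_x8 = 8 * lane_gbs           # ~31.5 GB/s host link
    one_port = 200 / 8                # one 200GbE port -> 25 GB/s (fits)
    two_ports = 2 * one_port          # both ports -> 50 GB/s (exceeds the link)
    print(f"{pcie5_x8:.1f} GB/s link vs {one_port:.0f} / {two_ports:.0f} GB/s demand")

So both QSFP cages are physically real, but the host link can only feed roughly one port's worth of traffic at a time.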

linuxftw•3mo ago
The DGX Spark seems to have one intended use case: local AI model development and testing. The Strix Halo is an amd64 chip with an iGPU; it can be used for any traditional PC workload and is a reasonable local-AI target device.

For me, the Strix Halo is the first nail in the coffin of discrete GPUs inside amd64 laptops. I think Nvidia knows this, which is why they're partnering with Intel to make an iGPU setup.

InTheArena•3mo ago
I think it's beyond that even: it's for local AI toolchain and model development and testing, or for those people who have a pre-existing Nvidia deployment infrastructure.

It feels like Nvidia spent a ton of money here on a piece of infrastructure (the big network pipes) that very few people will ever leverage, and that the rest of the infrastructure constrains somewhat.

Tuna-Fish•3mo ago
Next gen, AMD has the Medusa Halo with (reportedly) a 384-bit LPDDR6 bus. This should get you twice the memory of Strix Halo with 1.7 times the throughput when using memory that's already announced, with even better modules coming later.

I think with the success of Strix Halo as an inference platform, this market segment is here to stay.
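
Rough arithmetic behind that 1.7x figure (Strix Halo's 256-bit LPDDR5X-8000 is well documented; Medusa Halo's bus width is a leak, and the LPDDR6 speed below is a guess picked to match the claim):

    def bandwidth_gbs(bus_bits: int, mts: int) -> float:
        return bus_bits / 8 * mts / 1000  # bytes per transfer * GT/s

    strix = bandwidth_gbs(256, 8000)   # 256 GB/s
    medusa = bandwidth_gbs(384, 9067)  # ~435 GB/s at a hypothetical ~9067 MT/s
    print(f"{medusa / strix:.1f}x")    # ~1.7x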

karmakaze•3mo ago
I'm really excited and looking forward to this refresh. The APU spec leaks for the upcoming PS6 and Xbox have some clues as well. My wishlist: more memory bandwidth, more GPU/NPU cores, actual unified memory rather than a designated carve-out, more PCIe lanes. Of course there could be more/new AMD packaging magic sprinkled in too.
Tepix•3mo ago
I really hope they go beyond 128GB with Medusa Halo. 384-bit and LPDDR6 sounds promising. The Strix Halo is already pretty sweet as-is.
makeitdouble•3mo ago
The saddest part of this is the lack of availability: at this point there are two standard laptops using this chip, the Z13 being the only high-performance one. There's the Framework line as well, but it isn't available in many countries and serves a very specific audience.

And that's half a year after the first machines came to market.

I love the Z13, but it's clearly a niche machine, so I'm assuming they are having a really hard time manufacturing the chips? Is all the capacity getting eaten by Apple?

voidmain0001•3mo ago
HP ZBook Ultra G1a is a great option and can be bought with up to 128GB RAM.
makeitdouble•3mo ago
Yes. It looks to be the more standard option form-factor-wise, which is a blessing and a curse.

For instance, they went for the standard lower-resolution display (1920x1440 for 14" vs 2560x1600 for 13" on the Z13). The thermals also looked better on the Z13, which comes partly from the form factor and partly from Asus optimizing for that for so many years.

Of course the Z13 keyboard is meh; I expect most owners to have it detached 90% of the time and handle the machine more like a standalone screen/touch/pen input.

voidmain0001•3mo ago
The upper-tier G1a comes with an OLED 2.8K touch display.

The vapor-chamber cooling on the HP seems efficient, but the back-side venting on the Flow is clearly better.

https://h20195.www2.hp.com/v2/getpdf.aspx/c09119722.pdf

makeitdouble•3mo ago
Sounds like it's not available in Japan, or I'm not good enough with HP's site to find the customizable options... but it's good to know!
ThreatSystems•3mo ago
Granted, US pricing for the HP ZBook Ultra was astronomical; within the EU it's on par with standard laptops, and to good effect. The only regret I have is ordering on release day and not waiting for the 128GB version; but battery life and performance have remained unmatched across the pretty large workloads I have thrown at it!

Outside of laptops, Beelink and co. are making NUCs with these chips, which are relatively affordable!

I do agree the scarcity has limited their opportunity to assess the growth opportunity.

green7ea•3mo ago
I also have one, with 64GB — best laptop I've ever used :-). I have the same regret of not waiting for the 128GB version to be available before buying.
wmf•3mo ago
Beelink, GMKtec, Minisforum, Corsair...
dontlaugh•3mo ago
You can’t even buy the Z13 with more than 32 GB in most of Europe and certainly not with the 2-3 years of warranty most employers require for hardware they purchase.

I’m annoyed that I’ll probably have to pick a Framework 13 with less CPU and much less GPU merely because of availability.

mumber_typhoon•3mo ago
I wonder if a higher TDP is possible with the Framework Desktop. That one probably has much better cooling than these laptops with the same chip, and I wonder if the numbers are different.
AmVess•3mo ago
I haven't tested the power draw, but I have the mainboard from Framework that I put into a larger ITX case for better cooling.

My main PC is a 7950X3D, which has the same core/thread count as the Strix unit, and the Strix benches within the margin of error of the 7950X3D. Which is to say, the performance is the same.

That you can get the same compute power in a laptop is crazy.

spatular•3mo ago
Yes, 140W sustained, 160W burst (~10 seconds).
rcarmo•3mo ago
I would love to try out one of the mini-PCs that ship with this, but they seem to be made of either platinum (hugely overpriced in the EU) or unobtainium (no retailers carry them here, and getting something direct from China is dicey warranty-wise). ROCm 7 looks to be working already under most Linux distros, and having this as a workstation with a local LLM, or as a "home inference server" with Ollama and a few services, seems like a great solution.
dangus•3mo ago
ROCm is making great progress, but I've had enough hiccups (desktop with an RX 9070 XT) that I'd still recommend that those looking for AI capability continue using an Nvidia or Apple solution for the time being.

Still, I think it'll be quite equivalent soon.

I think one of the best AI systems in terms of price/performance is still just to build a desktop with dual RTX 3090s (of course you'll need a board that supports dual cards) and toss it in a closet.

Tuna-Fish•3mo ago
It depends on what you are doing. A lot of people who want to do local inference want to do it with much larger models than can fit on an RTX 3090, and Strix Halo is such a hit because it gives you reasonable (not great, but good enough not to be outright painful) performance with 128GB of memory.
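
The intuition, roughly: batch-1 decoding reads the whole model from memory for every generated token, so tokens/s is about bandwidth divided by model size. A sketch with illustrative, assumed numbers, not benchmarks:

    def rough_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
        # memory-bound decode: one full pass over the weights per token
        return bandwidth_gbs / model_gb

    print(rough_tokens_per_sec(20, 936))  # RTX 3090 (24GB VRAM): ~47 tok/s, small model
    print(rough_tokens_per_sec(40, 256))  # Strix Halo: ~6 tok/s on a ~70B-class Q4 model

A 3090 is far faster on anything that fits in 24GB; Strix Halo's pitch is that a 40GB-plus model runs at all, at usable speed.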
geerlingguy•3mo ago
Also, Vulkan is great, and much more stable. Plus it tends to work well for new, and even very old, graphics cards.
dismalaf•3mo ago
At this point Vulkan will just take over. AMD and Intel are fumbling ROCm and SYCL, whereas Vulkan already ships nearly everywhere.
almostgotcaught•3mo ago
> ROCm is making great progress

is the progress in the room with us?

wmf•3mo ago
Yes? For example, ROCm on MI355X is working fine a month after release; it didn't take a year.
cpburns2009•3mo ago
Have you looked at Corsair's AI Workstation 300 desktop PC? [1] It's 2000-2700 EUR depending on the model, and taking VAT into consideration that's comparable to the 1700-2300 USD pre-tax prices.

[1]: https://www.corsair.com/eu/en/c/ai-workstations

rcarmo•3mo ago
No, but it falls into the platinum side of the equation. I can rent a cloud GPU for a few hours a month and come out ahead.
overfeed•3mo ago
If the economics don't work out, perhaps this product is not for you and you're better off renting.
adgjlsfhk1•3mo ago
I don't think there's any computer hardware where buying comes out more economical than renting if you only use it a couple of hours a month.
hau•3mo ago
I generally agree with GP. Checkout via your link says "This item is currently on pre-order", btw. Retail mini-PCs are somehow harder to obtain than general-purpose ones.
tonyhart7•3mo ago
Probably because the chip itself is being shipped in "bigger margin" products.

A mini PC with that much compute power mainly has an audience of enthusiast home-labbers.

Any enterprise or regular homelab wouldn't even need it, hence why it's hard to find one available.

porphyra•3mo ago
Seems about the same price as the Minisforum MS-S1 Max and Framework Desktop in that case.
mandelken•3mo ago
I ordered the Framework Desktop 395 128GB edition for just under 1900 EUR. With some extras I paid just over 2000 including shipping to the EU. Didn't feel overpriced to me.
erinnh•3mo ago
I looked just now and it costs 2500 EUR without any storage.

Was it on sale or something?

mandelken•3mo ago
Huh, indeed, above 2300 EUR now. I made a deposit earlier this year and it shipped in August; I didn't notice the price had increased.
hereonout2•3mo ago
This one seems relatively cheap and ships from Germany

https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto...

suprjami•3mo ago
IIUC the high price is mostly from the high-bandwidth memory (which isn't actually that high-bandwidth compared to actual GPUs).
nexle•3mo ago
High Yield has a video that deep-dives into the 395 chip at the silicon level: https://youtu.be/maH6KZ0YkXU
InTheArena•3mo ago
I picked up a Framework Desktop and am running it through its paces right now. So far, it's an impressive little box. I'm really hopeful that this continues to drive more and more enthusiast support and engagement. Getting strong Vulkan- or ROCm-supported infrastructure would be great for everyone.
runjake•3mo ago
Related question: can I buy a desktop Zen 5 CPU, something like an RX 7600 XT, and some RAM, and get a high shared-memory-bandwidth situation between the system memory and the GPU, à la Strix Halo and Apple Silicon, without spending a ton of money?

And get pretty reasonable local LLM performance on some of the larger models for hobbyist use?

Edit: I don't have a good grasp on this, but I'm thinking I can only do shared memory when I'm using an APU and not a discrete GPU. Is this correct?

Rohansi•3mo ago
No, memory is not "unified" when you have a physically separate GPU. In that case memory is accessed through the PCIe bus, which will be a significant bandwidth bottleneck. PCIe tops out at 64GB/s for 16 lanes of PCIe 5, and not all GPUs support that.
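
For scale, rounded figures for the paths involved (the desktop DDR5 and GPU configs are assumed examples):

    paths_gbs = {
        "PCIe 5.0 x16 link (CPU <-> dGPU)": 63,
        "Dual-channel DDR5-6000 desktop RAM": 96,
        "Strix Halo 256-bit LPDDR5X-8000": 256,
        "RX 7600 XT GDDR6 (on-card VRAM)": 288,
    }
    for path, gbs in paths_gbs.items():
        print(f"{path}: ~{gbs} GB/s")

So the discrete card's own VRAM is fast, but anything that spills over PCIe into system RAM runs roughly 4x slower than Strix Halo's on-package unified memory.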
dzonga•3mo ago
How does the GPU compare, though, to the ones in M-series Macs?