
Claude 4 System Card

https://simonwillison.net/2025/May/25/claude-4-system-card/
68•pvg•2h ago•21 comments

You probably don't need a dependency injection framework

http://rednafi.com/go/di_frameworks_bleh/
29•ingve•1h ago•24 comments

Reinvent the Wheel

https://endler.dev/2025/reinvent-the-wheel/
387•zdw•12h ago•160 comments

On File Formats

https://solhsa.com/oldernews2025.html#ON-FILE-FORMATS
53•ibobev•4d ago•34 comments

How to Install Windows NT 4 Server on Proxmox

https://blog.pipetogrep.org/2025/05/23/how-to-install-windows-nt-4-server-on-proxmox/
83•thepipetogrep•7h ago•26 comments

Google Shows Off Android XR Smart Glasses with In-Lens Display

https://www.macrumors.com/2025/05/20/google-android-xr-smart-glasses/
15•tosh•3d ago•12 comments

I used o3 to find a remote zeroday in the Linux SMB implementation

https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
471•zielmicha•18h ago•132 comments

Why old games never die, but new ones do

https://pleromanonx86.wordpress.com/2025/05/06/why-old-games-never-die-but-new-ones-do/
141•airhangerf15•11h ago•128 comments

Space is not a wall: toward a less architectural level design

https://www.blog.radiator.debacle.us/2025/05/space-is-not-wall-toward-less.html
18•PaulHoule•3d ago•2 comments

Tachy0n: The Last 0day Jailbreak

https://blog.siguza.net/tachy0n/
204•todsacerdoti•13h ago•28 comments

The WinRAR Approach

https://basicappleguy.com/basicappleblog/the-winrar-approach
57•frizlab•4d ago•35 comments

Good Writing

https://paulgraham.com/goodwriting.html
223•oli5679•17h ago•232 comments

Nvidia Pushes Further into Cloud with GPU Marketplace

https://www.wsj.com/articles/nvidia-pushes-further-into-cloud-with-gpu-marketplace-4fba6bdd
62•Bostonian•3d ago•39 comments

Infinite Tool Use

https://snimu.github.io/2025/05/23/infinite-tool-use.html
4•tosh•1h ago•0 comments

Show HN: Rotary Phone Dial Linux Kernel Driver

https://gitlab.com/sephalon/rotary_dial_kmod
304•sephalon•19h ago•43 comments

The Xenon Death Flash: How a Camera Nearly Killed the Raspberry Pi 2

https://magnus919.com/2025/05/the-xenon-death-flash-how-a-camera-nearly-killed-the-raspberry-pi-2/
201•DamonHD•20h ago•75 comments

Hong Kong's Famous Bamboo Scaffolding Hangs on (For Now)

https://www.nytimes.com/2025/05/24/world/asia/hongkong-bamboo-scaffolding.html
174•perihelions•20h ago•51 comments

Using the Apple ][+ with the RetroTink-5X

https://nicole.express/2025/apple-ii-more-like-apple-5x.html
36•zdw•11h ago•9 comments

Peer Programming with LLMs, for Senior+ Engineers

https://pmbanugo.me/blog/peer-programming-with-llms
137•pmbanugo•19h ago•61 comments

Lone coder cracks 50-year puzzle to find Boggle's top-scoring board

https://www.ft.com/content/0ab64ced-1ed1-466d-acd3-78510d10c3a1
141•DavidSJ•14h ago•27 comments

An Almost Pointless Exercise in GPU Optimization

https://blog.speechmatics.com/pointless-gpu-optimization-exercise
46•atomlib•4d ago•2 comments

Scientific conferences are leaving the US amid border fears

https://www.nature.com/articles/d41586-025-01636-5
303•mdhb•11h ago•186 comments

The Logistics of Road War in the Wasteland

https://acoup.blog/2025/05/23/collections-the-logistics-of-road-war-in-the-wasteland/
67•ecliptik•12h ago•27 comments

It is time to stop teaching frequentism to non-statisticians (2012)

https://arxiv.org/abs/1201.2590
65•Tomte•15h ago•56 comments

Domain Theory Lecture Notes

https://liamoc.net/forest/dt-001Y/index.xml
28•todsacerdoti•8h ago•3 comments

Contacts let you see in the dark with your eyes closed

https://scitechdaily.com/from-sci-fi-to-superpower-these-contacts-let-you-see-in-the-dark-with-your-eyes-closed/
40•geox•2d ago•7 comments

Exposed Industrial Control Systems and Honeypots in the Wild [pdf]

https://gsmaragd.github.io/publications/EuroSP2025-ICS/EuroSP2025-ICS.pdf
47•gnabgib•14h ago•0 comments

AI, Heidegger, and Evangelion

https://fakepixels.substack.com/p/ai-heidegger-and-evangelion
133•jger15•18h ago•70 comments

Personal Computer Origins: The Datapoint 2200

https://thechipletter.substack.com/p/personal-computer-origins-the-datapoint
18•rbanffy•3d ago•1 comment

Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency

https://www.ft.com/content/9fdb4e2b-93ea-436d-92e5-fa76ee786caa
113•louthy•20h ago•84 comments

Nvidia Pushes Further into Cloud with GPU Marketplace

https://www.wsj.com/articles/nvidia-pushes-further-into-cloud-with-gpu-marketplace-4fba6bdd
62•Bostonian•3d ago

Comments

Bostonian•3d ago
https://archive.is/cnYO8
snihalani•3h ago
ty
justahuman74•4h ago
I can't see the cloud providers being happy about this, it whitelabels away their branding and customer experience flows.

It puts nvidia on both the vendor and customer side of the relationship, which seems odd

seydor•4h ago
Google makes their own chips too
londons_explore•4h ago
TPUs have serious compatibility problems with a good chunk of the ML ecosystem.

That alone means many users will want to use Nvidia hardware even at a decent price premium when the alternative is an extra few months of engineering time in a very fast moving market.

jszymborski•3h ago
I haven't worked with TPUs, but my understanding is that they are pretty plug-and-play with Google's frameworks (JAX, TF) and also pretty simple to use with PyTorch [0]. That covers nearly all of the market share.

[0] https://docs.pytorch.org/xla/release/r2.7/learn/xla-overview...

ketzo•4h ago
Well, what are they gonna do about it?

Nvidia has the most desirable chips in the world, and their insane prices reflect that. Every hyperscaler is already massively incentivized to build their own chips and find some way to take Nvidia down a peg in the value chain.

Everyone in the world who can is already coming for Nvidia’s turf. No reason they can’t repay the favor.

And beyond just margin-taking, Nvidia’s true moat is the CUDA ecosystem. Given that, it’s hugely beneficial to them to make it as easy as possible for every developer in the world to build stuff on top of Nvidia chips — so they never even think about looking elsewhere.

Xevion•3h ago
While I don't dispute that they're objectively the most desirable at the current moment - I do think your comment implies that they deserve it, or that people WANT Nvidia to be the best.

It almost sounds like you're cheering on Nvidia, framing it as "everyone else trying to reduce the value of Nvidia", meanwhile they have a long, long history of closed-source drivers and proprietary, patented, cost-inflated technology that would be identical, if not inferior, to alternatives if it weren't for their market share and vendor lock-in strategies.

"Well, what are they gonna do about it?" When dealing with a bully, you go find friends. They're going to fund other chip manufacturers and push for diversity, fund better drivers and compatibility. That's the best possible future anyone could hope for.

almostgotcaught•2h ago
> that would be identical if not inferior to alternatives - if it weren't for their market share and vendor lock-in strategies.

1. "Identical if not for market share" is a complete contradiction when what we're talking about is the network effect of CUDA

2. What vendor lock-in? What are you talking about? They have a software and compiler stack that works with their chips. How is that lock-in? That's literally just their product offering. In fact, the truth is you can compile CUDA for AMD (using hipify) and guess what - the result sucks, because AMD isn't a comparable alternative!

Ygg2•1h ago
> In fact the truth is you can compile CUDA for AMD (using hipify)

You can compile x64 to ARM and performance tanks. Does this mean ARM isn't a comparable alternative to x64?

It just means their software works badly with said architecture. Could be that AMD acceleration is horrible (but then FSR would be worse), or it could be that it's just different, or the translation layer is bad.

Dr4kn•2h ago
I don't think this problem is going to be solved by hyperscalers offering their own accelerators. They probably offer better price-to-performance, but they try to lock you into their ecosystem.

With the Nvidia solution you at least have another option. Vendor agnostic, but with Nvidia lock-in.

If most ML startups, one hyperscaler, and ideally also AMD went with one common backend, it might get enough traction to become *the* standard.

_zoltan_•1h ago
Ugh, that's such a bad take. Why wouldn't you cheer for NVIDIA? They had the discipline, the courage, and the long-term vision that nobody else did for the last 20 years.

Closed source? Who cares? It's their own product. Vendor lock-in? It's their own chips, man. You wouldn't expect Nvidia to develop software for AMD chips, would you? That would be insane. I would not do that.

Their tech is superior to everybody else's and Jensen keeps pulling rabbits out of a hat. I hope they keep going strong for the next decade.

Ygg2•1h ago
> why wouldn't you cheer for NVIDIA?

Because they are an amoral mass that seeks profit and has turned the GPU market into a clusterfuck?

> Their tech is superior to everybody

Their only saving graces are CUDA and DLSS; their hardware has been overvalued for quite some time.

winterbloom•26m ago
Couldn't you replace Nvidia with Apple here?

They have a nice software stack, but the hardware is overvalued.

chii•2h ago
> Nvidia’s true moat is the CUDA ecosystem.

It is true, but also not. Nvidia is certainly producing a chip that nobody else can replicate (unless they're the likes of Google, and even they are not interested in doing so).

The CUDA moat is the same type of moat as Intel's x86 instruction set: plenty of existing programs and software stacks have been written against it, and the cost to migrate away is high. These LLM pipelines are similar, and even more costly to migrate.

But because LLMs are still immature right now (it's only been approx. 3 years!), there's still room to move the instruction set. And middleware libraries can help (PyTorch, for example, has more than just the CUDA backend, even if the others are a bit less mature).

The real moat Nvidia has is their hardware capability; CUDA is the disguised moat.
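The middleware point above can be sketched with a toy backend registry — all names here are invented for illustration, not PyTorch's actual internals — showing how a library can dispatch the same user-facing call to different hardware backends, which is what makes the "instruction set" movable:

```python
# Hypothetical sketch of middleware-style backend dispatch.
# Real libraries (e.g. PyTorch) do this with far more machinery;
# here a registry maps a device string to an implementation class.

BACKENDS = {}

def register_backend(name):
    """Decorator that records a backend implementation under `name`."""
    def wrap(cls):
        BACKENDS[name] = cls()
        return cls
    return wrap

def _naive_matmul(a, b):
    # Pure-Python stand-in for a vendor kernel (cuBLAS, XLA, ...).
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

@register_backend("cuda")
class CudaBackend:
    def matmul(self, a, b):
        # A real CUDA backend would call into cuBLAS here.
        return _naive_matmul(a, b)

@register_backend("xla")
class XlaBackend:
    def matmul(self, a, b):
        # A TPU backend would lower to XLA instead; same user-facing API.
        return _naive_matmul(a, b)

def matmul(a, b, device="cuda"):
    # User code stays the same when the device string changes - that is
    # the portability middleware buys, and why it erodes a pure-ISA moat.
    return BACKENDS[device].matmul(a, b)
```

Switching `device="cuda"` to `device="xla"` changes nothing in the caller's code, which is the sense in which middleware lets the ecosystem "move the instruction set" underneath existing programs.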

tester756•1h ago
>it is true, but also not. nvidia is certainly producing a chip that nobody else can replicate

AMD, Intel?

_zoltan_•1h ago
NVLink says hello. Then rack scale NVLink says hello...

Nobody can touch it. Then that's just the hardware. The software is so much better on Nvidia. The width and breadth of their offering is great and nobody is even close.

tester756•1h ago
UALink?

>Ultra Accelerator Link (UALink) is an open specification for a die-to-die interconnect and serial bus between AI accelerators. It is co-developed by Alibaba, AMD, Apple, Astera Labs,[1] AWS, Cisco, Google, Hewlett Packard Enterprise, Intel, Meta, Microsoft and Synopsys.[2]

rvnx•1h ago
Google is doing TPUs, isn't that exactly for this?
chii•1h ago
iirc, Google's stuff is only for Google, and they're not selling it as something that others can buy.

I suppose this can change.

mattlondon•1h ago
Ask Google: TPUs.
alexgartrell•3h ago
The cloud business model is to use scale and customer ownership to crush hardware margins to dust. They’re also building their own accelerators to try to cut Nvidia out altogether.
cbg0•43m ago
I've always felt that the business model is nickel-and-diming for things like storage/bandwidth and locking in customers with value-add black-box services that you can't easily replace with open-source solutions.

Just took a random server: https://instances.vantage.sh/aws/ec2/m5d.8xlarge?duration=mo... - to get a decent price on it you need to commit to three years at $570 per month (no storage or bandwidth included). Over the course of 3 years that's $20,520 for a server that's ~$10K to buy outright, and even with colo costs over the same time frame you'll spend a lot less, so not exactly crushing those margins to dust.
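The arithmetic in that comparison checks out; here's a quick sketch using the figures above (the ~$10K purchase price and the omission of colo costs are the commenter's estimates):

```python
# Three-year reserved commitment vs. buying the hardware outright,
# using the commenter's figures (storage, bandwidth, colo excluded).
monthly_commit = 570      # USD/month on a 3-year commitment
months = 3 * 12
purchase_price = 10_000   # estimated outright price for comparable hardware

total_cloud = monthly_commit * months
premium = total_cloud / purchase_price

print(total_cloud)        # 20520
print(round(premium, 2))  # 2.05
```

So the committed cloud spend is roughly 2x the hardware's sticker price before storage, bandwidth, or colo are counted, which is the commenter's point about margins.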

xbmcuser•2h ago
This is what Nvidia has always done: creeping into the margins of its partners and taking over. All its GPU board partners will tell the same story.
neximo64•2h ago
Why? If the GPUs are used and they want more, it makes it easy; also, it's opt-in.
AlotOfReading•4h ago
I can see the value of the product, but this seems like an incredibly dangerous offering for smaller clouds. Nvidia has significant leverage to drive prices down to commodity and keep any margin for themselves, while pushing most of the risk onto their partners.
alexgartrell•3h ago
I’d imagine that these clouds are probably being incentivized to participate
saagarjha•4h ago
So it's basically Vast.ai but for cloud providers?
londons_explore•4h ago
Isn't that rather stepping on the toes of your biggest clients - Microsoft, AWS, Google Cloud, etc.?
aranchelk•4h ago
Customers of those services have a lot of considerations; as long as Nvidia doesn't undercut the prices too much, I think no.

Getting more developers creating more models that can then be run on those services will likely expand business for all of those vendors.

noosphr•3h ago
All those customers are also building their own chips.

Having been a partner for Microsoft Research, I've also had them try to patent the stuff we were providing them.

In short with megacorps the only winning move is to fuck them faster than they can fuck you.

mi_lk•2h ago
That’s a beautiful conclusion
zombiwoof•3h ago
Bye AMD
Xevion•3h ago
Dumb.

No cloud provider is gonna see further price gouging from the company with the largest market share and think "Yeah, let's disconnect from the only remaining competitor and make sure every nail is in our coffin."

It's probably the opposite. I bet this move will lead to AMD's increased funding towards compatibility and TPU development, in the hopes that they'll become a serious competitor to Nvidia.

chii•2h ago
> AMD's increased funding towards compatibility and TPU development

No investor is going to bet on the second-place horse, because they would've done the betting _before_ Nvidia became the winning powerhouse it has become!

The fact is, AMD's hardware capability is just insufficient to compete, and they're not getting there fast enough. Unlike the games industry, there aren't a lot of low-budget buyers here.

basilgohar•32m ago
AMD is hindered more by their software and network effects than raw hardware performance.
netfortius•3h ago
Isn't their CEO the guy who called Trump's re-industrialisation policies "visionary"? [1] Maybe that's where the idea of cloud (like all the others, nowadays) is coming from?!? ;->

[1] https://www.reuters.com/business/aerospace-defense/nvidia-ce...

theincredulousk•2h ago
This has been developing for a while. The big players have basically been competing for allocations of a fixed production run, so Nvidia negotiated into those allocations that some % of the compute capacity it "sells" them is reserved and exclusively leased back to Nvidia.

So now Nvidia has a whole bunch of cloud infrastructure, hosted by the usual suspects, that it can use for the same type of business the usual suspects do.

well played tbh