frontpage.

Loose wire leads to blackout, contact with Francis Scott Key bridge

https://www.ntsb.gov:443/news/press-releases/Pages/NR20251118.aspx
201•DamnInteresting•4h ago•80 comments

Verifying your Matrix devices is becoming mandatory

https://element.io/blog/verifying-your-devices-is-becoming-mandatory-2/
11•LorenDB•31m ago•0 comments

Researchers discover security vulnerability in WhatsApp

https://www.univie.ac.at/en/news/detail/forscherinnen-entdecken-grosse-sicherheitsluecke-in-whatsapp
107•KingNoLimit•3h ago•23 comments

Europe is scaling back GDPR and relaxing AI laws

https://www.theverge.com/news/823750/european-union-ai-act-gdpr-changes
490•ksec•10h ago•504 comments

Workday to Acquire Pipedream

https://newsroom.workday.com/2025-11-19-Workday-Signs-Definitive-Agreement-to-Acquire-Pipedream
7•gaws•26m ago•1 comment

Building more with GPT-5.1-Codex-Max

https://openai.com/index/gpt-5-1-codex-max/
325•hansonw•6h ago•188 comments

Why CUDA translation wont unlock AMD

https://eliovp.com/why-cuda-translation-wont-unlock-amds-real-potential/
32•JonChesterfield•1w ago•18 comments

Meta Segment Anything Model 3

https://ai.meta.com/sam3/
221•lukeinator42•7h ago•46 comments

Jailbreaking AI Models to Phish Elderly Victims

https://simonlermen.substack.com/p/can-ai-models-be-jailbroken-to-phish
4•DalasNoin•21m ago•0 comments

How Slide Rules Work

https://amenzwa.github.io/stem/ComputingHistory/HowSlideRulesWork/
33•ColinWright•3h ago•6 comments

Roblox Requires Age Checks for Communication, Ushering in New Safety Standard

https://corp.roblox.com/newsroom/2025/11/roblox-requires-age-checks-limits-minor-and-adult-chat
25•urbanshaman•3h ago•18 comments

Static Web Hosting on the Intel N150: FreeBSD, SmartOS, NetBSD, OpenBSD and Linux

https://it-notes.dragas.net/2025/11/19/static-web-hosting-intel-n150-freebsd-smartos-netbsd-openb...
113•t-3•7h ago•37 comments

Cognitive and mental health correlates of short-form video use

https://psycnet.apa.org/fulltext/2026-89350-001.html
194•smartmic•4h ago•150 comments

Larry Summers resigns from OpenAI board

https://www.cnbc.com/2025/11/19/larry-summers-epstein-openai.html
223•koolba•11h ago•223 comments

Thunderbird adds native Microsoft Exchange email support

https://blog.thunderbird.net/2025/11/thunderbird-adds-native-microsoft-exchange-email-support/
312•babolivier•13h ago•87 comments

Racing karts on a Rust GPU kernel driver

https://www.collabora.com/news-and-blog/news-and-events/racing-karts-on-a-rust-gpu-kernel-driver....
36•mfilion•4h ago•3 comments

The patent office is about to make bad patents untouchable

https://www.eff.org/deeplinks/2025/11/patent-office-about-make-bad-patents-untouchable
170•iamnothere•2h ago•16 comments

Gaming on Linux has never been more approachable

https://www.theverge.com/tech/823337/switching-linux-gaming-desktop-cachyos
202•throwaway270925•3h ago•148 comments

Launch HN: Mosaic (YC W25) – Agentic Video Editing

https://mosaic.so
104•adishj•9h ago•96 comments

Vortex: An extensible, state of the art columnar file format

https://github.com/vortex-data/vortex
31•tanelpoder•4d ago•4 comments

Linux Career Opportunities in 2025: Skills in High Demand

https://www.linuxcareers.com/resources/blog/2025/11/linux-career-opportunities-in-2025-skills-in-...
6•dxs•51m ago•3 comments

The Death of Arduino?

https://www.linkedin.com/posts/adafruit_opensource-privacy-techpolicy-activity-739690336223705497...
352•ChuckMcM•5h ago•180 comments

How to identify a prime number without a computer

https://www.scientificamerican.com/article/how-to-identify-a-prime-number-without-a-computer/
38•beardyw•1w ago•25 comments

Pozsar's Bretton Woods III: The Framework

https://philippdubach.com/2025/10/25/pozsars-bretton-woods-iii-the-framework-1/2/
39•7777777phil•5h ago•16 comments

A $1k AWS mistake

https://www.geocod.io/code-and-coordinates/2025-11-18-the-1000-aws-mistake/
273•thecodemonkey•14h ago•234 comments

Branching with or Without PII: The Future of Environments

https://neon.com/blog/branching-environments-anonymized-pii
5•emschwartz•1w ago•3 comments

Tailscale Down

https://status.tailscale.com/incidents/01KAF1H8V7EGFKVG5KGZBB2RJC
56•fasz•2h ago•31 comments

What Killed Perl?

https://entropicthoughts.com/what-killed-perl
135•speckx•14h ago•316 comments

Exploring the limits of large language models as quant traders

https://nof1.ai/blog/TechPost1
109•rzk•17h ago•87 comments

The Future of Programming (2013) [video]

https://www.youtube.com/watch?v=8pTEmbeENF4
146•jackdoe•6d ago•91 comments

Why CUDA translation wont unlock AMD

https://eliovp.com/why-cuda-translation-wont-unlock-amds-real-potential/
32•JonChesterfield•1w ago

Comments

pixelpoet•49m ago
Actual article title says "won't"; wont is a word meaning habit or proclivity.
InvisGhost•28m ago
In situations like this, I try to focus on whether the other person understood what was being communicated rather than splitting hairs. In this case, I don't think anyone would be confused.
measurablefunc•47m ago
Why can't it be done w/ AI? Why does it need to be maintained w/ manual programming? Take the ROCm specification, take your CUDA codebase, let one of the agentic AIs translate it all into ROCm or the AMD equivalent.
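
(For context: a minimal sketch of what such a mechanical CUDA-to-HIP translation looks like. The vector-add kernel below and the hipify-style renames noted in its comments are illustrative, not code from the article or this thread; the point is that kernel bodies typically survive unchanged, which is exactly the rote mapping the article argues is not enough.)

    // vadd.cu -- toy CUDA vector add; comments note the renames a mechanical
    // CUDA-to-HIP translator (e.g. hipify) would apply. Illustrative only.
    #include <cuda_runtime.h>   // hipify: <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // identical in HIP
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged((void**)&a, bytes);            // hipify: hipMallocManaged
        cudaMallocManaged((void**)&b, bytes);
        cudaMallocManaged((void**)&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);      // launch syntax also valid in HIP
        cudaDeviceSynchronize();                         // hipify: hipDeviceSynchronize

        printf("c[0] = %f\n", c[0]);                     // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);           // hipify: hipFree
        return 0;
    }
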
jsheard•42m ago
The article is literally about how rote translation of CUDA code to AMD hardware will always give sub-par performance. Even if you wrangled an AI into doing the grunt work for you, porting heavily-NV-tuned code to not-NV hardware would still be a losing strategy.
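
(A concrete illustration of that kind of NV-specific tuning, sketched here rather than taken from the article: a warp-shuffle reduction that hard-codes NVIDIA's 32-thread warp size. A 1:1 translation keeps those constants, but on AMD CDNA parts the wavefront is 64 wide, so the masks, shared-memory sizing, and loop bounds are no longer the right ones even though the code still runs.)

    // warp_sum.cu -- block reduction tuned around NVIDIA's 32-thread warps.
    // A rote translation preserves the hard-coded 32s, which is the wrong
    // tuning for AMD's 64-wide wavefronts. Illustrative sketch only.
    #include <cuda_runtime.h>
    #include <cstdio>

    __device__ float warp_sum(float v) {
        // Assumes warpSize == 32 (true on NVIDIA, not on AMD CDNA).
        for (int offset = 16; offset > 0; offset >>= 1)
            v += __shfl_down_sync(0xffffffffu, v, offset);
        return v;
    }

    __global__ void block_sum(const float* in, float* out, int n) {
        __shared__ float partial[32];          // one slot per 32-thread warp
        int i    = blockIdx.x * blockDim.x + threadIdx.x;
        int lane = threadIdx.x % 32;
        int warp = threadIdx.x / 32;

        float v = (i < n) ? in[i] : 0.0f;
        v = warp_sum(v);
        if (lane == 0) partial[warp] = v;
        __syncthreads();

        if (warp == 0) {                       // assumes blockDim.x is a multiple of 32
            v = (lane < blockDim.x / 32) ? partial[lane] : 0.0f;
            v = warp_sum(v);
            if (lane == 0) atomicAdd(out, v);
        }
    }

    int main() {
        const int n = 1 << 20;
        float *in, *out;
        cudaMallocManaged((void**)&in, n * sizeof(float));
        cudaMallocManaged((void**)&out, sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;
        *out = 0.0f;

        block_sum<<<(n + 255) / 256, 256>>>(in, out, n);
        cudaDeviceSynchronize();
        printf("sum = %.0f (expected %d)\n", *out, n);
        cudaFree(in); cudaFree(out);
        return 0;
    }
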
measurablefunc•20m ago
The point of AI is that it is not a rote translation & 1:1 mapping.
cbarrick•41m ago
Has this been done successfully at scale?

There's a lot of handwaving in this "just use AI" approach. You have to figure out a way to guarantee correctness.

measurablefunc•19m ago
There are tons of test suites, so if the tests pass, that provides a reasonable guarantee of correctness. Although it would be nice if there were also a proof of correctness for the compilation from CUDA to AMD.
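
(A sketch of what such a test usually amounts to, assuming "tests" here means golden-output checks: run the ported kernel and compare against a CPU reference within a floating-point tolerance. The saxpy kernel and tolerances below are illustrative, not from any real suite. Passing tells you the port computes the right answer; it says nothing about whether it is fast on the new hardware, which is the article's point.)

    // check_port.cu -- golden-output check for a ported kernel: compare the GPU
    // result against a CPU reference within a tolerance. This validates
    // correctness only, not performance. Illustrative sketch.
    #include <cuda_runtime.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    __global__ void saxpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 16;
        std::vector<float> x(n), y(n), y_ref(n);
        for (int i = 0; i < n; ++i) { x[i] = 0.5f * i; y[i] = y_ref[i] = 1.0f; }

        // CPU reference result
        for (int i = 0; i < n; ++i) y_ref[i] = 2.0f * x[i] + y_ref[i];

        // GPU ("ported") result
        float *dx, *dy;
        cudaMalloc((void**)&dx, n * sizeof(float));
        cudaMalloc((void**)&dy, n * sizeof(float));
        cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(2.0f, dx, dy, n);
        cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

        // Tolerance-based comparison (relative + absolute)
        int bad = 0;
        for (int i = 0; i < n; ++i)
            if (std::fabs(y[i] - y_ref[i]) > 1e-5f * std::fabs(y_ref[i]) + 1e-6f) ++bad;

        if (bad == 0) printf("PASS\n");
        else          printf("FAIL: %d mismatches\n", bad);

        cudaFree(dx); cudaFree(dy);
        return bad != 0;
    }
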
bigyabai•29m ago
Because it doesn't work like that. TFA is an explanation of how GPU architecture dictates the featureset that is feasibly attainable at runtime. Throwing more software at the problem would not enable direct competition with CUDA.
measurablefunc•17m ago
I am assuming that is all part of the specification that the agentic AI is working with & since AGI is right around the corner I think this is a simple enough problem that can be solved with AI.
Blackthorn•16m ago
I don't know why you're being downvoted, because even if you're Not Even Wrong, this is exactly the sort of thing that people trying to sell AI have endlessly promised it will do for us.
measurablefunc•10m ago
Let's see who else manages to catch on to the real point I'm making.
j16sdiz•57s ago
It's the same as asking "Why not just outsource it to <some country>?"

AI ain't magic.

You need even more effort to manage, test, and validate the result.

outside1234•41m ago
Are the hyperscalers really using CUDA? This is what really matters. We know Google isn't. Are AWS and Azure for their hosting of OpenAI models et al?
bigyabai•24m ago
> We know Google isn't.

Google isn't internally, so far as we know. Google's hyperscaler products have long offered CUDA options, since the demand isn't limited to AI/tensor applications that cannibalize TPU's value prop: https://cloud.google.com/nvidia

lvl155•25m ago
Let’s just say what it is: devs are too constrained to jump ship right now. It’s a massive land grab, and you are not going to spend time tinkering with CUDA alternatives when even a six-month delay can basically kill your company/organization. Google and Apple are two companies with enough resources to do it. Google isn’t, because they’re keeping it proprietary to their cloud. Apple still has its head stuck in the sand, barely capable of fixing Siri.
kj4ips•24m ago
I agree pretty strongly. A translation layer like this makes an intentional trade: giving up performance and hardware alignment in exchange for less lead time and effort than a proper port would take.
martinald•8m ago
Perhaps I'm misunderstanding the market dynamics, but isn't AMD's real opportunity inference rather than research?

Training etc. still happens on Nvidia, but isn't inference relatively easy to do on vLLM et al. with a true ROCm backend, with little effort?

mandevil•5m ago
Yeah, ROCm-focused code will always beat generic code compiled down. But this is a really difficult game to win.

For example, Deepseek R-1 was released optimized for running on Nvidia hardware and needed some adaptation to run as well on ROCm, for exactly the same reasons that hand-tuned ROCm code beats generic code compiled into ROCm. Basically, the Deepseek team, for their own purposes, built R-1 to fit Nvidia's way of doing things (because Nvidia is market-dominant). Once they released it, someone like Elio or AMD had to do the work of adapting the code to run best on ROCm.

More established players who aren't out-of-left-field surprises like Deepseek, e.g. Meta with its Llama series, mostly coordinate with AMD ahead of release day, but I suspect AMD still has to pay for that engineering work itself, while Meta does the Nvidia-side work on its own. This simple fact, that every researcher makes their stuff work on CUDA while AMD or Elio has to do the work to make it just as performant on AMD, is what keeps people in the CUDA universe.