frontpage.

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•2m ago•0 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
1•m00dy•3m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•4m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
1•okaywriting•11m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•13m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•14m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•15m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•16m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•16m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•17m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•17m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•21m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•21m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•22m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•22m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•31m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•31m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•33m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•33m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•33m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
4•pseudolus•34m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•34m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•35m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•36m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•36m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•37m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•38m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•40m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•41m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•42m ago•0 comments

CUDA Tile Open Sourced

https://github.com/NVIDIA/cuda-tile
201•JonChesterfield•1mo ago

Comments

jauntywundrkind•1mo ago
Will be interesting to see if Nvidia and others have any interest & energy in getting this used by others, and if there actually is an ecosystem forming around it.

Google leading XLA & IREE, with awesome intermediate representations used by lots of hardware platforms, backing really excellent JAX & PyTorch implementations, and having tools for layout & optimization folks can share: they really built an amazing community.

There's still so much room for planning/scheduling, so much hardware we have yet to target. RISC-V has really interesting vector instructions, for example, and it seems like there's so much exploration / work to do to better leverage that.

Nvidia has partners everywhere now. NVLink is used by Intel, AWS Trainium, others. Yesterday, the Groq exclusive license that Nvidia paid for?! Seeing how and when CUDA Tiles emerges will be interesting. Moving from fabric partnerships, up up up the stack.

turtletontine•1mo ago
On the RISC-V vector instructions, could you elaborate? Are the vector extensions substantially different from those in ARM or x86?
adgjlsfhk1•1mo ago
it's fairly similar to Arm's sve2, but very different from the x86 side in that the instructions are variable length rather than fixed
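To make that difference concrete, here is a conceptual Python sketch (not real RVV code; `HW_VLEN` is a made-up stand-in for whatever vector length the hardware grants) of the strip-mining loop that RISC-V's `vsetvli` enables, where the same binary adapts to any hardware vector width:

```python
# Conceptual sketch of RISC-V RVV-style strip-mining: each iteration asks
# the hardware how many elements it may process (like vsetvli), instead of
# hard-coding a fixed SIMD width the way x86 SSE/AVX does.
HW_VLEN = 8  # stand-in for the vector length the hardware grants

def vla_add(a, b):
    """Element-wise add over arrays of any length, with no separate
    scalar tail loop: the final partial chunk just gets a smaller vl."""
    n = len(a)
    out = [0] * n
    i = 0
    while i < n:
        vl = min(HW_VLEN, n - i)      # "vsetvli": request n-i, receive vl
        for j in range(vl):           # one vector instruction's worth
            out[i + j] = a[i + j] + b[i + j]
        i += vl
    return out
```

The point of the variable-length model is that the same code runs unchanged on hardware with any vector width, whereas fixed-width SIMD typically needs a remainder loop and per-width code paths.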
Moosdijk•1mo ago
> There's still so much room for planning/scheduling, so much hardware we have yet to target

this is nicely illustrated by this recent article:

https://news.ycombinator.com/item?id=46366998

saagarjha•1mo ago
Wrong type of scheduling.
Moosdijk•1mo ago
Thanks for correcting me. Can you point me to what I need to search for to understand the differences?
saagarjha•1mo ago
https://en.wikipedia.org/wiki/Instruction_scheduling
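As a toy illustration of the concept behind that link (a hypothetical single-issue machine and op format, nothing GPU-specific): a greedy list scheduler picks, each cycle, a ready instruction, preferring long-latency ones so their results arrive in time:

```python
def schedule(ops):
    """Greedy list scheduling on a single-issue toy machine.
    ops: dict name -> (deps, latency). Each cycle, issue the ready
    instruction with the largest latency, so long-latency ops (loads)
    start as early as possible and their latency is hidden."""
    finish = {}    # name -> cycle when its result becomes available
    order = []
    cycle = 0
    pending = set(ops)
    while pending:
        ready = [n for n in pending
                 if all(d in finish and finish[d] <= cycle
                        for d in ops[n][0])]
        if not ready:
            # nothing ready: stall until the next in-flight result lands
            cycle = min(finish[d] for n in pending for d in ops[n][0]
                        if d in finish and finish[d] > cycle)
            continue
        n = max(ready, key=lambda x: ops[x][1])  # longest latency first
        order.append(n)
        finish[n] = cycle + ops[n][1]
        cycle += 1  # one issue slot per cycle
    return order
```

With a 3-cycle load, the scheduler issues the load first and fills the wait with independent work, which is the kind of reordering meant here, as opposed to the warp/wave scheduling the linked HN story was about.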
pjmlp•1mo ago
For NVidia, it suffices that this is a Python JIT allowing CUDA compute kernels to be programmed directly in Python instead of C++; yet another way Intel and AMD, alongside the Khronos APIs, lag behind in developer experience for GPU compute programming.

Ah, and Nsight debugging also supports Python CUDA Tiles debugging.

https://developer.nvidia.com/blog/simplify-gpu-programming-w...

Q6T46nT668w6i3m•1mo ago
Slang is a fantastic developer experience.
pjmlp•1mo ago
Especially when using the tooling from the company that created it, NVIDIA, before offering it to Khronos as a GLSL replacement.
Conscat•1mo ago
I work at Nvidia, and my team is using Slang for all of our (numerous and non-trivial) kernels because its automatic differentiation type system is so nice.
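For anyone unfamiliar with the feature being praised: forward-mode automatic differentiation can be sketched with dual numbers in a few lines of plain Python. This is a generic illustration of the technique, not Slang's type-system implementation:

```python
class Dual:
    """Dual number val + dot*eps with eps**2 == 0; the eps coefficient
    carries the derivative through ordinary arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and df/dx at x in a single forward pass."""
    y = f(Dual(x, 1.0))
    return y.val, y.dot
```

For f(x) = x² + 3x at x = 2, a single call yields both the value 10 and the exact derivative 2x + 3 = 7; a language-level version of this (what Slang provides) lets whole kernels be differentiated without hand-writing gradient code.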
saagarjha•1mo ago
Nsight does not have a debugger.
dahart•1mo ago
What do you mean? Are you unaware of Nsight VSE? https://developer.nvidia.com/nsight-visual-studio-edition
saagarjha•1mo ago
I was aware of their Visual Studio plugins but I did not know that they called their debugger support for Visual Studio “Nsight” as well.
pjmlp•1mo ago
Yes it does, apparently you never used it.
almostgotcaught•1mo ago
> Google leading XLA & IREE

IREE hasn't been at G for >2 years.

nl•1mo ago
> Groq exclusive license

non-exclusive license actually.

CamperBob2•1mo ago
Fun game: see how many clicks it takes you to learn what MLIR stands for.

I lost count at five or six. Define your acronyms on first use, people.

fragmede•1mo ago
I did it in three. I selected it in your comment, and then had to hit "more" to get to the menu to ask Google about it, which brought me to https://www.google.com/search?q=MLIR which says: MLIR is an open-source compiler infrastructure project developed as a sub-project of the LLVM project.

Get better at computers and stop needing to be spoon-fed information, people!

reactordev•1mo ago
In this day and age, asking questions about what something is is a minefield of “just ask AI” and “You should know this”. Let’s stop putting down people who ask questions and root out those that have shitty answers.
ThrowawayTestr•1mo ago
Google is nearly 30 years old
pjmlp•1mo ago
And we are not counting Yahoo, Altavista, Ask Jeeves, MSN,...
fragmede•1mo ago
I get why it feels frustrating when someone snaps "just google it." Nobody likes feeling dumb. That said, there’s a meaningful difference between asking a genuine question and demanding that every discussion be padded to accommodate readers who won’t even type four letters into a search bar. Expecting complete spoon-feeding in technical threads isn’t curiosity; it’s a refusal to engage. Learning requires participation.
CamperBob2•1mo ago
You're posting a spirited defense of substandard technical writing. Just curious -- why is that?
guipsp•1mo ago
You cannot explain everything to everyone all the time. Besides, this is not even a paper. Sometimes you are not the target audience and have to put some words into Google.
fragmede•1mo ago
Because I think the norm we reinforce here actually matters.

When confusion gets framed as "this is substandard writing", it rewards showing up and performing a lack of context rather than engaging with the substance or asking clarifying questions. Over time that creates pressure to write to the lowest common denominator, instead of the audience the author is clearly aiming at.

HN already operates on an implicit baseline (CUDA, open source, LLVM, etc.) and mostly lets comments fill in gaps. That usually produces better discussions than treating every unfamiliar term as an author failure, especially when someone is just trying to share or explain something they care about.

So yeah, I am genuinely curious why you see personal unfamiliarity as something the entire discussion should reorganize itself around.

CamperBob2•1mo ago
When confusion gets framed as "this is substandard writing", it rewards showing up and performing a lack of context rather than engaging with the substance or asking clarifying questions. Over time that creates pressure to write to the lowest common denominator, instead of the audience the author is clearly aiming at. ... So yeah, I am genuinely curious why you see personal unfamiliarity as something the entire discussion should reorganize itself around.

(Shrug) The fact is that all major style guides -- APA, MLA, AP, Chicago, probably some others -- call for potentially-unfamiliar acronyms to be defined on first use, and it's common enough to do so. For some reason, though, essentially nobody who writes about this particular topic agrees with that.

Which is cool -- it's not my field, so I don't really GAF. I'm mostly just remarking on how unusually difficult it was to drill down on this particular term. I'll avoid derailing the topic further than I already have.

reactordev•1mo ago
Easy, if that’s how you feel, skip the comment and don’t engage.

Telling people who want to have that participation and discussion to “RTFM” is not a good response.

Often you’ll come across the authors on these posts that can shed direct, 1st person evidence, of what we’re talking about.

So please, when someone asks “what is that?” Don’t respond with “RTFM”.

fragmede•1mo ago
Asking "what is this?" is fine. Treating "I was unfamiliar with this" as evidence that the post is deficient is not.

HN already assumes a baseline of technical literacy. When something falls outside that baseline, the usual move is to ask for context or links, not to reframe personal unfamiliarity as an author failure.

So please, don’t normalize treating "I don’t know this yet" as a failure of the post.

reactordev•1mo ago
I agree but if someone asks “What is this?” and it’s not covered by the article, what we shouldn’t do is put that person down by telling them to “just google it”.

If that is your answer, please just don’t comment.

pluralmonad•1mo ago
But not defining acronyms on first use is a failure of etiquette. It's your prerogative not to hold this to be true, but many of us do. There is little value in eliding the on-first-use definition.
VTimofeenko•1mo ago
> Learning requires participation

I won't argue, but there is a middle ground between articles consisting of pure JAFAs and this:

> accommodate readers who won’t even type four letters into a search bar

I think it helps if acronyms are expanded at least once or in a footnote so that the potential new reader can follow along and does not need to guess what ACMV^ means.

^: Awesome Combobulating Method by VTimofeenko, patent pending.

poita66•1mo ago
And yet you didn’t tell us what it stands for, just what it is. The person you’re responding to was specifically talking about finding out what it stands for
iaebsdfsh•1mo ago
From Wikipedia: The name "Multi-Level Intermediate Representation" reflects the system’s ability to model computations at various abstraction levels and progressively lower them toward machine code.
roughly•1mo ago
The ol’ TMA problem.
piskov•1mo ago
If only there was a chat-based app that you could ask questions to.
ipnon•1mo ago
GPU programming definitely is not beginner friendly. There's a much higher learning curve than most open source projects. To learn basic Python you need to know about definitions and loops and variables, but to learn CUDA kernels you need to know maybe an order of magnitude more concepts to write anything useful. It's just not worth the time to cater to people who don't RTFM, the README would be twice as long and be redundant to the target audience of the library.
CamperBob2•1mo ago
That's the whole problem. I had to "R" multiple "FMs" before one of them bothered to define the acronym.

Stop carrying water for poor documentation practice.

ipnon•1mo ago
It's kind of like if the Django README explained how SQL works, the structure of HTTP requests, best practices for HTML, and so on. If you don't know what MLIR is, you might not be the target audience for this library. Nvidia in general doesn't prioritize developer experience as much as companies like Meta do for open source projects like React.
CamperBob2•1mo ago
HTTP and HTML are very common acronyms; nobody should be getting out of high school these days without knowing them, and if they somehow managed to do so, they're darned sure not reading HN. Even SQL is pretty hard to avoid if you've been in an IT-adjacent industry for a while.

However, MLIR is a highly-specialized term. The problem with failing to define a term like that is that I don't know up front if I'm the target audience for the article. I had to Google it, and when I did that, all I found at first were yet more articles that failed to define it.

Wikipedia gets the job done, but these days, Wikipedia is often a long way down the Google search results list. I think they downranked it when they started force-feeding AI answers (which also didn't help).

__patchbit__•1mo ago
Use the AI prompt to pinprick learn.

Just say to the AI, "Explain THIS".

RobotToaster•1mo ago
ChatGPT told me MLIR stands for "Modern Life Is Rubbish".
reactordev•1mo ago
YMMV
CamperBob2•1mo ago
HN: "Learning is good"

Just say to the AI, "Explain THIS".

Also HN: "Not like that"

saagarjha•1mo ago
This is a GitHub repo for compiler engineers.
CamperBob2•1mo ago
Cool. This is a site for hackers of all stripes.
saagarjha•1mo ago
Yes, so given that you clearly had trouble figuring out what it was, maybe you could have shared with the class?
bigyabai•1mo ago
I don't give "finance hackers" or "growth hackers" the time of day. Many hackers are held in utter contempt, and often for a very good reason.
CamperBob2•1mo ago
I'm afraid I have some really bad news about your humble hosts here at Hacker News, then.
rswail•1mo ago
Based on the use of LLVM I guessed "Machine Learning Intermediate Representation"?

How close was I?

xmorse•1mo ago
Writing this in Mojo would have been so much easier
3abiton•1mo ago
It's barely gaining adoption though. The lack of buzz is a chicken-and-egg issue for Mojo. I fiddled with it briefly (mainly to get some of my Python scripts working), and it was surprisingly easy. It'll shoot up one day for sure if Lattner doesn't give up on it early.
ronsor•1mo ago
Isn't the compiler still closed source? I and many other ML devs have no interest in a closed-source compiler. We have enough proprietary things from NVIDIA.
0x696C6961•1mo ago
Yeah, the mojo pitch is so good, but I don't think anyone has an appetite for the potential fuckery that comes with a closed source platform.
3abiton•1mo ago
Yes, but Lattner has said multiple times that it's closed until it matures (he apparently did this with LLVM and Swift too), so not unusual. His open-source target is end of 2026. In all fairness, I have zero doubt that he will deliver.
pjmlp•1mo ago
Given Swift for TensorFlow, let's see how this one goes.
saagarjha•1mo ago
That one did get open sourced but nobody ended up wanting to use it
jacobgorm•1mo ago
Why would anyone want to pair a subpar language with a subpar ML framework?
pjmlp•1mo ago
That is the thing: what lessons were learnt from it, and how will Mojo tackle them?
boredatoms•1mo ago
I feel like its in AMD/Intel/G’s interest to pile a load of effort into (an open source) mojo
bigyabai•1mo ago
Use-cases like this are why Mojo isn't used in production, ever. What does Nvidia gain from switching to a proprietary frontend for a compiler backend they're already using? It's a legal headache.

Second-rate libraries like OpenCL had industry buy-in because they were open. They went through standards committees and cooperated with the rest of the industry (even Nvidia) to hear out everyone's needs. Lattner gave up on appealing to that crowd the moment he told Khronos to pound sand. Nobody should be wondering why Apple or Nvidia won't touch Mojo with a thirty-nine and a half foot pole.

xmorse•1mo ago
Kernels now written in Mojo were all hand-written in MLIR, like in this repo. They made a full language because that's not scalable; a sane language is totally worth it. Nvidia will probably end up buying them in a few years.
bigyabai•1mo ago
I don't think Nvidia would acquire Mojo when the Triton compiler is open source, optimized for Nvidia hardware, and considered an industry standard.
saagarjha•1mo ago
Nobody is writing MLIR by hand, what are you on about? There are so many MLIR frontends
pjmlp•1mo ago
NVidia is perfectly fine with C++ and Python JIT.

CUDA Tile was designed exactly to give Python parity in writing CUDA kernels, acknowledging the relevance of Python while offering a path where researchers don't need to mess with C++.

It was announced at this year's GTC.

NVidia has no reason to use Mojo.

itsthecourier•1mo ago
what about a forty-foot pole? would it be viable?
oedemis•1mo ago
how would Mojo with MAX optimize the process?
pjmlp•1mo ago
It would help if they were not so macOS- and Linux-focused.

Julia, Python GPU JITs work great on Windows, and many people only get Windows systems as default at work.

bigyabai•1mo ago
I've commissioned a board of MENSA members to devise a workaround for this issue; they've identified two potential solutions.

1) Install Linux

2) Summon Chris Lattner to play you a sad song on the world's smallest violin in honor of the Windows devs that refuse to install WSL.

pjmlp•1mo ago
I go with: customers keep using CUDA with Python and Julia and ignore that Chris Lattner's company exists, while Mojo repeats the Swift for TensorFlow history.

What about that outcome?

saagarjha•1mo ago
Approximately nobody writing high performance code for AI training is using Windows. Why should they target it?
pjmlp•1mo ago
As desktop, and sometimes that is the only thing available.

When is the Year of NPUs on Linux?

saagarjha•1mo ago
This targets Blackwell GPUs so I’m not sure what you are talking about
pjmlp•1mo ago
The same: hardware available for Windows users, as work devices at several companies, used by researchers who work at said companies:

https://www.pcspecialist.de/kundenspezifische-laptops/nvidia...

Which as usual, kind of work but not really, in GNU/Linux.

llmslave2•1mo ago
I really want Mojo to take off. Maybe in a few years. The lack of a stdlib holds it back more than they think, and since their focus is narrow at the moment, it's not useful for the vast majority of work.
ipsum2•1mo ago
Mojo is not open source and would not get close to the performance of cuTile.

I'm tired of people shilling things they don't understand.

almostgotcaught•1mo ago
it's all over this thread (and every single other hn thread about GPU/ML compilers) - people quoting random buzzword/clickbait takes.
boywitharupee•1mo ago
shouldn't the title be "CUDA Tile IR Open Sourced"?
OneDeuxTriSeiGo•1mo ago
It's more or less the same thing. CUDA Tile is the name of the IR; cuTile is the name of the high-level DSLs.
toolboxg1x0•1mo ago
NVIDIA tensor core units, where the second column in kernel optimization is producing a test suite.
opan•1mo ago
>The CUDA Tile IR project is under the Apache License v2.0 with LLVM Exceptions
fooblaster•1mo ago
Let's see if developers sleepwalk into another trap to keep us locked into nvidia's hardware for the next decade.
the__alchemist•1mo ago
IMO it's not Nvidia's fault the competing APIs are high friction.
flyingcoder•1mo ago
AMD screwed up so badly.
fooblaster•1mo ago
That is true, but that doesn't mean Nvidia is not engaging in engineering intended to kneecap competition. Triton and other languages like it are a huge threat, and cuTile is a means to combat that threat and prevent a hardware abstraction layer.
positron26•1mo ago
Hundreds of thousands of developers with access to a global communication network were not stopped by AMD. Why act like dependents or wait for some bright star of consensus unless the intent is really about getting the work for free?

We don't have to wait for singular companies or foundations to fix ecosystem problems. Only the means of coordination are needed. https://prizeforge.com isn't there yet, but it is already capable of bootstrapping its own development. Matching funds, joining the team, or contributing on MuTate will all make the ball pick up speed faster.

nemothekid•1mo ago
>We don't have to wait for singular companies or foundations to fix ecosystem problems.

Geohot has been working on this for about a year, and every roadblock he's encountered he has had to damn near pester Lisa Su about getting drivers fixed. If you want the CUDA replacement that would work on AMD, you need to wait on AMD. If there is a bug in the AMD microcode, you are effectively "stopped by AMD".

positron26•1mo ago
We have to platform and organize people, not rely on lone individuals. If there is a deep well of aligned interest, that interest needs a way to represent itself so that AMD has something to talk to, on a similar footing as a B2B relationship. When you work with other companies with hundreds and thousands of employees, it's natural that emails from individuals get drowned out or misunderstood as circulated around.
nemothekid•1mo ago
Geohot isn't working by himself - it's part of his B2B company, tinygrad, that sells AMD systems and is VC funded.

https://tinygrad.org/#tinybox

You can see in his table he calls out his AMD system as having "Good" GPU support, vs. "Great" for nvidia. So, yes, I would argue he is doing the work to platform and organize people, on a professional level to sell AMD systems in a sustainable manner - everything you claim that needs to be done and he is still bottlenecked by AMD.

positron26•1mo ago
> everything you claim that needs to be done

A single early-stage company is not ecosystem-scale organization. It is instead the legacy benchmark to beat. This is what we do today because the best tools in our toolbox are a corporation or a foundation.

Whether AMD stands to benefit from doing more or less, we are likely in agreement that Tinygrad is a small fraction of the exposed interest and that if AMD were in conversation with a more organized, larger fraction of that interest, that AMD would do more.

I'm not defending AMD doing less. I am insisting that ecosystems can do more and that the only reason they don't is because we didn't properly analyze the problems or develop the tools.

trueismywork•1mo ago
TileIR is Apache licensed so AMD can implement it as well.
OneDeuxTriSeiGo•1mo ago
CUDA Tile is an open-source MLIR dialect, so it wouldn't take much to write MLIR transforms that map the Tile IR to TOSA, or to the gpu + vector dialects plus some amdgpu or other specialty dialects.

The Tile dialect is pretty much independent of the Nvidia ecosystem, so all it takes is one good set of MLIR transform passes to move anything in the CUDA stack that compiles to Tile out of the Nvidia ecosystem prison.

So if anything this is actually a massive opportunity to escape vendor lock in if it catches on in the CUDA ecosystem.
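A toy sketch of what such a transform pass amounts to (made-up op tuples and op names, not the real MLIR pattern-rewrite API): match ops from one dialect and rewrite them into another, passing everything else through unchanged:

```python
# Toy illustration of a dialect-lowering pass. A "module" is a list of
# (dialect.op, args) tuples; LOWERINGS maps hypothetical "tile" ops onto
# hypothetical "tosa" equivalents. Real MLIR uses typed pattern rewrites
# over an SSA IR, but the shape of the transformation is the same.
LOWERINGS = {
    "tile.add": "tosa.add",
    "tile.matmul": "tosa.matmul",
}

def lower_tile_to_tosa(module):
    """Rewrite every tile.* op with a known lowering; ops from other
    dialects pass through unchanged; unknown tile ops fail loudly."""
    out = []
    for op, args in module:
        if op in LOWERINGS:
            out.append((LOWERINGS[op], args))
        elif op.startswith("tile."):
            raise NotImplementedError(f"no lowering for {op}")
        else:
            out.append((op, args))
    return out
```

Failing loudly on unhandled ops mirrors how a real conversion pass marks a dialect illegal in the target: partial, silent lowering would produce a module no backend can consume.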

saagarjha•1mo ago
Yes, but why would you want to use this over the other MLIR dialects that are already cross platform?
OneDeuxTriSeiGo•1mo ago
That's not really the point. The point is that Nvidia is updating a lot of their higher level CUDA tooling to integrate with and compile to Tile IR. So this gives an escape hatch for tools built on top of CUDA to deploy outside the ecosystem.
RobotToaster•1mo ago
Or it's Nvidia doing an Embrace Extend Extinguish on MLIR.
trueismywork•1mo ago
TileIR license means llvm can just fork and support it themselves as needed.
pjmlp•1mo ago
It is up to AMD, Intel and Khronos to offer APIs and tools that are actually nice to use.

They have had about 15 years to move beyond C99, stone-age workflows to compile GLSL and C99 with their drivers, no library ecosystem, and printf debugging.

Eventually some of the issues were fixed, after they started seeing that only hardliners would put up with such a development experience, and by then it was too late.

tester756•1mo ago
Isn't there OneAPI with its huge ecosystem of tools, debuggers, etc?
pjmlp•1mo ago
Yes, that is part of "it was too late".

oneAPI builds on top of SYCL and is basically Intel's CUDA. SYCL itself is already the second attempt to get C++ into OpenCL, after the C++ kernel language of OpenCL 2.x, an effort that worked so well that OpenCL 3.0 is basically a reboot back to OpenCL 1.0.

Also, even SYCL only got a proper kick-off after Codeplay came up with its implementation; nowadays they sell oneAPI support and tooling, after being acquired by Intel.

RicoElectrico•1mo ago
Obviously they will, as with the mainframe and cloud.
gaogao•1mo ago
The compiler for CUDA Tile being Blackwell-only is a baffling decision. I wanted to try it out, but it's only really easy to grab H100s quickly right now. I guess maybe I'll try it out on my 5070 Ti after traveling, but I'm more likely to stick to an IR that targets multiple platforms, since they couldn't be bothered.
robobsolete•1mo ago
I was keen to try it too, but oh well
0-_-0•1mo ago
This is basically the Nvidia equivalent of cooperative_matrix_2 in Vulkan, which is vendor-agnostic and should get much more hype than it's getting.
pjmlp•1mo ago
Maybe Vulkan could provide native support for Python, C++20, and a graphical debugging experience.

It is surely not equivalent as of today.

0-_-0•1mo ago
Or even just pointers...
pyuser583•1mo ago
I’m glad CUDA and “open source” are in the same sentence again.

We’d all prefer cross platform programming, but if you’re going to do platform specific, I prefer open source to closed source.

Thank you NVIDIA!