
Doodle: Compositional 2D Graphics

https://www.creativescala.org/doodle/
1•noelwelsh•1m ago•0 comments

The User Is Visibly Frustrated

https://pscanf.com/s/354/
1•francesco__b•2m ago•0 comments

Prompt Screener – Catches biased prompts before you send them to AI

https://chromewebstore.google.com/detail/prompt-screener/hdooilgdenkeeccfomlkkenhmelobcjm
1•jahaanshah•4m ago•0 comments

Odin, Wikipedia and Engagement Farming

https://katamari64.se/posts/2026/odin-wikipedia/
1•baranul•5m ago•0 comments

Francesca Albanese on the untapped power of international law

https://www.reuters.com/lifestyle/culture-current/francesca-albanese-untapped-power-international...
1•skibz•8m ago•0 comments

Removing fsync from our local storage engine

https://fractalbits.com/blog/remove-fsync/
3•zzsheng•12m ago•1 comment

Tired of "Move to left and decrease border radius"? Try Browser-Mutation

https://github.com/JosPMSilva/Browser-mutation
1•JosPMSilva•12m ago•0 comments

Ask HN: Degraded GPT-5.5 Quality?

1•ramon156•20m ago•0 comments

K8s-Container_escape_audit Version 3

https://github.com/liamromanis101/K8s-container_escape_audit
1•lromanis•26m ago•1 comment

Grok TTS: X's Latest TTS Model Sets a New Baseline

https://techstackups.com/articles/grok-tts-xai-text-to-speech-model/
1•ritzaco•28m ago•0 comments

Silicon oscillators solve problems in seconds using semiconductors

https://techxplore.com/news/2026-05-silicon-oscillators-problems-thousands-years.html
1•01-_-•30m ago•0 comments

U.S. and China Pursue Guardrails to Stop AI Rivalry from Spiraling into Crisis

https://www.wsj.com/world/china/u-s-and-china-pursue-guardrails-to-stop-ai-rivalry-from-spiraling...
1•01-_-•32m ago•0 comments

HomeDesignsAI

https://homedesigns.ai
2•bellamoon544•36m ago•0 comments

Progressively Improving a Ball of Mud

https://afilina.com/improving-ball-of-mud
2•luu•37m ago•0 comments

The best ideas come from the arena

https://www.reproof.app/blog/amex-history
1•maguay•40m ago•0 comments

ZAYA1-8B: An 8B MoE Model with 760M Active Params Matching DeepSeek-R1 on Math

https://firethering.com/zaya1-8b-open-source-math-coding-model/
1•steveharing1•41m ago•0 comments

Ask HN: Does a vetted marketplace improve hiring?

1•Sam6late•42m ago•0 comments

Subquadratic claims to have fixed attention scaling with a 12M context window

https://twitter.com/alex_whedon/status/2051663268704636937
1•jiwidi•46m ago•0 comments

Who edited the date stamp of this post

https://xcancel.com/iamasoothsayer/status/1535494638391664641?s=46
2•razodactyl•50m ago•0 comments

The Semantic Conception of Truth and the Foundations of Semantics (1944)

https://www.ditext.com/tarski/tarski.html
2•nill0•53m ago•0 comments

My first post scored 1. Karpathy's autoresearch idea helped me repost

https://github.com/meller/laneconductor
2•meller_a•58m ago•0 comments

Has Meta ever provided Qualcomm EDL files for recovery?

1•nar001•1h ago•0 comments

Mapping Project Complexity with AI

https://www.maiobarbero.dev/articles/project-complexity-ai-skills/
1•maiobarbero•1h ago•1 comment

Show HN: Production-Ready MERN Job Board Template

https://auditjobs.up.railway.app/
1•hlymrk•1h ago•0 comments

Show HN: Crypto Cards – 136 debit/credit cards, MIT-licensed list

https://github.com/mbtrilla/awesome-crypto-cards
1•mbtrilla•1h ago•0 comments

Message Brokers Are Modern Grids (2020)

https://yusufaytas.com/message-brokers-are-modern-grids
3•return_null•1h ago•0 comments

Show HN: Modolap, Improve the Reliability of Your Software Systems

https://modolap.com/
1•ronfriedhaber•1h ago•0 comments

I Wrote a Nix Flake for Helium Browser with Home Manager and NixOS Modules

https://github.com/oxcl/nix-flake-helium-browser
2•oxcl•1h ago•0 comments

Show HN: Social Network for Corporate Cringe

https://CringeOut.com
7•CringeOut•1h ago•5 comments

Plimpton 322 – Babylonian clay tablet of triangles 1k years before Pythagoras

https://en.wikipedia.org/wiki/Plimpton_322
1•lifeisstillgood•1h ago•1 comment

Next-Gen GPU Programming: Hands-On with Mojo and Max Modular HQ

https://www.youtube.com/live/uul6hZ5NXC8?si=mKxZJy2xAD-rOc3g
44•solarmist•1y ago

Comments

solarmist•1y ago
I'm really hoping Modular.ai takes off. GPU programming seems like a nightmare; I'm not surprised they felt the need to build an entirely new language to tackle that bog.
mirsadm•1y ago
GPU programming isn't really that bad. I am a bit skeptical this is the way to solve it. The issue is that details do matter when you're writing stuff on the GPU. How much shared memory are you using? How is it scheduled? Is it better to inline or run multiple passes, etc.? Halide is the closest, I think.
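A minimal CUDA sketch of the kind of detail in question (the kernel, tile size, and names here are illustrative, not from the talk): the shared memory a block declares caps how many blocks the scheduler can keep resident per SM, and even the tile padding changes performance.

    #define TILE 32

    // Tiled matrix transpose: the shared-memory tile is the tuning knob.
    // A bigger tile means fewer resident blocks per SM; the +1 column of
    // padding avoids bank conflicts. Neither detail changes the result,
    // only the speed, which is exactly the point above.
    __global__ void transpose_tiled(const float* in, float* out, int n) {
        __shared__ float tile[TILE][TILE + 1];

        int x = blockIdx.x * TILE + threadIdx.x;
        int y = blockIdx.y * TILE + threadIdx.y;
        if (x < n && y < n)
            tile[threadIdx.y][threadIdx.x] = in[y * n + x];

        __syncthreads();  // all loads must land before the transposed stores

        x = blockIdx.y * TILE + threadIdx.x;
        y = blockIdx.x * TILE + threadIdx.y;
        if (x < n && y < n)
            out[y * n + x] = tile[threadIdx.x][threadIdx.y];
    }
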
solarmist•1y ago
What are you skeptical of? I believe the problem this is solving is a framework that's not CUDA, allows low-level access to the hardware, makes it easy to write kernels, and is not Nvidia-only. If you watch the video, you'll see you can write directly in asm if you need to (see the sketch below). You have full control if you want it. But it provides primitives and higher-level objects that handle common cases.

I'm a novice in the area, but Chris is well respected in this space and cares a lot about performance.
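For what it's worth, plain CUDA has the same escape hatch via inline PTX; a tiny hedged sketch (the kernel name and the trivial add are made up for illustration, the point is the mechanism):

    // Inline PTX inside an ordinary CUDA kernel: the "drop to asm
    // when you need it" level of control described above.
    __global__ void add_ptx(const int* a, const int* b, int* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            int r;
            // PTX integer add: r = a[i] + b[i]
            asm("add.s32 %0, %1, %2;" : "=r"(r) : "r"(a[i]), "r"(b[i]));
            c[i] = r;
        }
    }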

pjmlp•1y ago
There are already plenty of languages in the CUDA world; that is one reason it is favoured.

The problem isn't the language, rather how to design the data structures and algorithms for GPUs.

solarmist•1y ago
Not sure I fully understand your comment, but I'm pretty sure the talk addresses exactly that.

The primitives and pre-coded kernels provided by CUDA (it solves for the most common scenarios first and foremost) are what's holding things back, and to get those algorithms and data structures down to the hardware level you need something flexible that can talk directly to the hardware.

pjmlp•1y ago
C, C++, Fortran, Python JIT from NVidia, plus Haskell, .NET, Java, Futhark, Julia from third parties, and anything else that bothers to create a backend targeting PTX, NVVM IR, or now cuTile.

The pre-coded kernels help a lot, but you don't necessarily have to use them.
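Concretely, "pre-coded kernels" here means vendor libraries like cuBLAS; a minimal sketch of leaning on one instead of hand-writing a GEMM (device pointers are assumed already allocated and filled, and error checks are omitted):

    #include <cublas_v2.h>

    // C = A * B for n x n matrices, using NVidia's pre-tuned SGEMM kernel
    // instead of a hand-written one. Column-major, as cuBLAS expects.
    void gemm(const float* dA, const float* dB, float* dC, int n) {
        cublasHandle_t handle;
        cublasCreate(&handle);
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
        cublasDestroy(handle);
    }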

melodyogonna•1y ago
Yes, the problem isn't the language, it is the entire stack. I think people focus too much on Mojo while ignoring the actual solution Modular has built, which is MAX. The main idea here is that MAX provides a consistent API both for library authors (e.g. vLLM, Ollama) to target and for hardware vendors to integrate with - so similar to LLVM.

Basically, imagine being able to target CUDA without having to do much extra work for your inference to also run on other GPU vendors, e.g. AMD, Intel, Apple - all with performance matching or surpassing what the hardware vendors themselves can come up with.

Mojo comes into the picture because you can program MAX with it, creating custom kernels that are JIT-compiled to the right vendor code at runtime.
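The single-vendor version of that JIT step already exists in CUDA as NVRTC; a hedged sketch of the mechanism (this is the CUDA-only analogue, not the actual MAX API, and error handling is omitted):

    #include <nvrtc.h>
    #include <stdlib.h>

    // JIT-compile a kernel from source at runtime. In the MAX picture,
    // this compile step is where the right vendor backend would be chosen.
    static const char* src =
        "extern \"C\" __global__ void scale(float* x, float s, int n) {\n"
        "  int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
        "  if (i < n) x[i] *= s;\n"
        "}\n";

    char* jit_to_ptx(void) {
        nvrtcProgram prog;
        nvrtcCreateProgram(&prog, src, "scale.cu", 0, NULL, NULL);
        nvrtcCompileProgram(prog, 0, NULL);
        size_t n;
        nvrtcGetPTXSize(prog, &n);
        char* ptx = (char*)malloc(n);
        nvrtcGetPTX(prog, ptx);  // PTX text, loadable via cuModuleLoadData
        nvrtcDestroyProgram(&prog);
        return ptx;
    }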

diabllicseagull•1y ago
It is a noble cause. I've spent ten years of my life using CUDA professionally, outside the AI domain mind you. For most of those years, there was a strong desire to break away from CUDA and the associated Nvidia tax on our customers. But one thing we didn't want was to move from depending on CUDA to depending on another intermediary that would also mean financial drain, like the enterprise licensing these folks want to use. Sadly, open source alternatives weren't inspiring much confidence either, with their limited feature coverage or just not knowing if they would be supported in the long term (support for new hardware, fixes, etc.).
pjmlp•1y ago
Also, while as a language nerd I find Mojo cool, given that NVidia is going full speed ahead with Python support in CUDA, as announced at GTC 2025, to the point of designing a new IR as the basis for their JIT, very few researchers will bother with Mojo.

Also, what NVIDIA is doing has full Windows support, while Mojo support still isn't there, other than by making use of WSL.

melodyogonna•1y ago
Why? Will the new Nvidia Python stuff work on AMD GPUs and other non-Nvidia accelerators?
pjmlp•1y ago
It remains to be seen how much of that will happen for Mojo and MAX, while most researchers are using CUDA anyway - and best of all, it works on their laptops, which cannot be said for AMD GPUs and other non-Nvidia accelerators.

Naturally assuming they are using laptops with NVidia GPUs.

catapart•1y ago
My mistake completely, but I thought this was going to be something to do with a new scheme or re-thinking of graphics programming APIs, like Metal, Vulkan or OpenGL. Now I'm kind of bummed that it is what it is, because I got really excited for it to be that other thing. =(
pjmlp•1y ago
That is already taking place with work graphs, and with making shader languages more C++-like.
ttoinou•1y ago
Seems like with it you will be able to compile and execute one codebase on multiple GPU targets, though.
ashvardanian•1y ago
There is a "hush-hush open secret" between minutes 31 and 33 of the video :)
refulgentis•1y ago
TL;DR: same binary runs on Nvidia and ATI today, but it's not announced yet.
throwaway314155•1y ago
They desperately need to disable whatever noise cancellation they're using on the audio. It keeps cutting out and sounds terrible.
solarmist•1y ago
Yeah, the mic quality was terrible.
hogepodge•1y ago
This was the first time we ran an event in the office with this wireless mic setup. We're definitely aware of the problems, and will have them fixed for the next event.
Archit3ch•1y ago
> Other Accelerators (e.g. Apple Silicon GPUs): free for <= 8 devices

From their license.

It's not obvious what happens when you have >8 users, with one GPU each (typical laptop users).

threecheese•1y ago
This is covered by ARM, which they consider a CPU, so it doesn't fall into that clause. IOW, no restrictions.