frontpage.

Michael Burry Speaks [video]

https://www.youtube.com/watch?v=nsE13fvjz18
1•avonmach•2m ago•0 comments

Why Sourcegraph and Amp Are Becoming Independent Companies

https://sourcegraph.com/blog/why-sourcegraph-and-amp-are-becoming-independent-companies
2•janpio•2m ago•0 comments

O'Saasy License Agreement

https://www.fizzy.do/license
1•hahahacorn•4m ago•0 comments

Bending Spoons Acquires Eventbrite

https://www.businesswire.com/news/home/20251202408560/en/Eventbrite-Enters-into-Definitive-Agreem...
3•crivabene•4m ago•0 comments

The Enshittification of Plex Is Kicking Off, Starting with Free Roku Users

https://gizmodo.com/the-enshittification-of-plex-is-kicking-off-starting-with-free-roku-users-200...
1•Bender•5m ago•0 comments

Eventbrite to go private in a $500M deal

https://seekingalpha.com/news/4527545-ticketing-platform-eventbrite-to-go-private-in-a-500m-deal-...
2•ewf•5m ago•0 comments

Google Antigravity vibe-codes user's drive out of existence

https://www.theregister.com/2025/12/01/google_antigravity_wipes_d_drive/
1•Bender•6m ago•1 comments

Amp, Inc. – Amp is spinning out of Sourcegraph

https://ampcode.com/news/amp-inc
9•pdubroy•7m ago•0 comments

Pwning OpenAI Atlas Through Exposed Browser Internals

https://www.hacktron.ai/blog/hacking-openai-atlas-browser
1•bearsyankees•8m ago•0 comments

Show HN: Validation system eliminates 90% of AI code failures (97.8% accuracy)

https://transformationagents.ai/webinar
1•buttersmoothAI•8m ago•0 comments

I Fed Claude 7 Years of Daily Journals. It Showed Me the Future of AI

https://medium.com/swlh/i-fed-claude-7-years-of-daily-journals-it-showed-me-the-future-of-ai-2c13...
1•Franz23•10m ago•0 comments

Just Use Postgres

https://www.manning.com/books/just-use-postgres
1•agentdrek•11m ago•1 comments

Sam Altman Declares 'Code Red' as Google's Gemini Surges

https://fortune.com/2025/12/02/sam-altman-declares-code-red-google-gemini-ceo-sundar-pichai/
3•tantalor•11m ago•2 comments

Churches withdraw investments from fossil fuels

https://dpa-international.com/politics/urn:newsml:dpa.com:20090101:251118-99-672846/
2•amai•12m ago•0 comments

Study: How social media use impacts teen body image

https://twin-cities.umn.edu/news-events/how-social-media-use-impacts-teen-body-image
1•giuliomagnifico•14m ago•0 comments

Two paths to Enlightenment: AV Linux 25 and MX Moksha step forward

https://www.theregister.com/2025/12/02/av_linux_25/
1•Bender•17m ago•0 comments

Why Distributed Teams Need Uniform Headshots for Trust and Cohesion

https://www.aiheadshotreviews.com/articles/remote-team-headshots-trust-cohesion
1•naveensky•17m ago•0 comments

Let's talk about feral kids (2024)

https://www.cfabo.org/blog/lets-talk-about-feral-kids
1•rolph•18m ago•1 comments

Ask HN: Saving/restoring web-app state–useful or pointless?

1•niteshnagpal•20m ago•0 comments

Show HN: JustHTML – A pure Python HTML5 parser that just works

https://github.com/EmilStenstrom/justhtml
2•EmilStenstrom•20m ago•0 comments

Garden of Eden

https://en.wikipedia.org/wiki/Garden_of_Eden_(cellular_automaton)
1•downboots•21m ago•0 comments

Sam Altman declares 'code red' to improve ChatGPT amid rising competition

https://apnews.com/article/openai-chatgpt-code-red-google-gemini-00d67442c7862e6663b0f07308e2a40d
2•geox•21m ago•2 comments

LLM from scratch, part 28 – training a base model from scratch on an RTX 3090

https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch
1•gpjt•22m ago•0 comments

Ask HN: Is a non-engineer's AI co-thinking log useful to anyone?

1•ys-oh•22m ago•0 comments

Show HN: WhatDoYouDo – A subscription-free rolodex/Relationship management tool

https://whatdoyoudo.xyz/
1•yashesmaistry•23m ago•0 comments

How to Fix an Unbearably Slow iCloud Drive

https://danielmiessler.com/blog/fix-slow-icloud
1•speckx•23m ago•0 comments

Fusionauth-JWT v6.0.0 Released

https://github.com/FusionAuth/fusionauth-jwt/releases/tag/6.0.0
1•mooreds•23m ago•0 comments

The biggest AI win I've experienced

https://github.com/calebmadrigal/fuzzygraph/pull/4
1•calebm•25m ago•1 comments

30 years of SOHO staring at the sun

https://www.space.com/astronomy/sun/30-years-of-soho-staring-at-the-sun-space-photo-of-the-day-fo...
1•almosthere•27m ago•0 comments

Show HN: 1T row challenge in 76s using DuckDB and 10,000 CPUs

https://docs.burla.dev/examples/process-2.4tb-in-parquet-files-in-76s
2•pancakeguy•28m ago•0 comments

Next-Gen GPU Programming: Hands-On with Mojo and Max Modular HQ

https://www.youtube.com/live/uul6hZ5NXC8?si=mKxZJy2xAD-rOc3g
44•solarmist•7mo ago

Comments

solarmist•7mo ago
I'm really hoping Modular.ai takes off. GPU programming seems like a nightmare; I'm not surprised they felt the need to build an entirely new language to tackle that bog.
mirsadm•7mo ago
GPU programming isn't really that bad. I am a bit skeptical that this is the way to solve it. The issue is that details do matter when you're writing stuff on the GPU. How much shared memory are you using? How is it scheduled? Is it better to inline or run multiple passes, etc.? Halide is the closest, I think.
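
For readers who haven't written GPU code, here is a minimal CUDA sketch (illustrative only; not from the talk, and not Mojo) of the kind of detail being described above: shared-memory size, block shape, and synchronization are all chosen by hand.

    // Minimal CUDA reduction kernel (illustrative only). The programmer
    // explicitly picks the block size and the amount of shared memory,
    // the kind of low-level decision discussed above.
    __global__ void block_sum(const float *in, float *out, int n) {
        extern __shared__ float tile[];              // sized at launch time
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // Tree reduction within the block; the stride halves each step.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride)
                tile[threadIdx.x] += tile[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0)
            out[blockIdx.x] = tile[0];               // one partial sum per block
    }

    // Launch: grid/block shape and shared-memory bytes are explicit tuning knobs.
    // block_sum<<<num_blocks, 256, 256 * sizeof(float)>>>(d_in, d_out, n);
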
solarmist•7mo ago
What are you skeptical of? I believe the problem this is solving is providing a framework that isn't CUDA, allows low-level access to the hardware, makes it easy to write kernels, and isn't Nvidia-only. If you watch the video, you'll see you can write directly in asm if you need to. You have full control if you want it. But it also provides primitives and higher-level objects that handle the common cases.

I'm a novice in this area, but Chris is well respected in it and cares a lot about performance.
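
As an aside on the "write directly in asm" point above: CUDA C++ has a comparable escape hatch today via inline PTX. A tiny hedged example (the helper name is made up) that reads the warp lane ID:

    // Inline PTX in CUDA C++ (illustrative): dropping below the language's
    // primitives when exact control is needed.
    __device__ unsigned int lane_id() {
        unsigned int lane;
        asm volatile("mov.u32 %0, %%laneid;" : "=r"(lane));
        return lane;
    }
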

pjmlp•7mo ago
There are already plenty of languages in the CUDA world; that is one of the reasons it is favoured.

The problem isn't the language, but rather how to design the data structures and algorithms for GPUs.

solarmist•7mo ago
Not sure I fully understand your comment, but I'm pretty sure the talk addresses exactly that.

The primitives and pre-coded kernels provided by CUDA (it solves for the most common scenarios first and foremost) are what's holding things back, and in order to get those algorithms and data structures down to the hardware level you need something flexible that can talk directly to the hardware.

pjmlp•7mo ago
C, C++, Fortran, and Python JIT from NVidia, plus Haskell, .NET, Java, Futhark, and Julia from third parties, and anything else that can be bothered to create a backend targeting PTX, NVVM IR, or now cuTile.

The pre-coded kernels help a lot, but you don't necessarily have to use them.

melodyogonna•7mo ago
Yes, the problem isn't the language, it is the entire stack. I think people focus too much on Mojo while ignoring the actual solution Modular has built, which is MAX. The main idea here is that MAX provides a consistent API both for library authors (e.g. vLLM, Ollama) to target and for hardware vendors to integrate with - similar to LLVM.

Basically, imagine being able to target CUDA without having to do much extra work for your inference to also run on other GPU vendors, e.g. AMD, Intel, Apple - all with performance matching or surpassing what the hardware vendors themselves can come up with.

Mojo comes into the picture because you can program MAX with it, creating custom kernels that are JIT-compiled to the right vendor code at runtime.

diabllicseagull•7mo ago
It is a noble cause. I've spent ten years of my life using CUDA professionally, outside the AI domain, mind you. For most of those years there was a strong desire to break away from CUDA and the associated Nvidia tax on our customers. But one thing we didn't want was to move from depending on CUDA to depending on another intermediary that would also mean a financial drain, like the enterprise licensing these folks want to use. Sadly, open source alternatives weren't fostering much confidence either, with their limited feature coverage or just not knowing whether they would be supported in the long term (support for new hardware, fixes, etc.).
pjmlp•7mo ago
Also, while as a language nerd I find Mojo cool, given that NVidia is going full speed ahead with Python support in CUDA, as announced at GTC 2025, to the point of designing a new IR as the basis for their JIT, very few researchers will bother with Mojo.

Also, what NVIDIA is doing has full Windows support, while Mojo support still isn't there, other than by making use of WSL.

melodyogonna•7mo ago
Why? Will the new Nvidia Python stuff work on AMD GPUs and other non-Nvidia accelerators?
pjmlp•7mo ago
It still remains to be seen how much that will happen for Mojo and MAX, while most researchers are using CUDA anyway; best of all, it works on their laptops, which cannot be said for AMD GPUs and other non-Nvidia accelerators.

Naturally assuming they are using laptops with NVidia GPUs.

catapart•7mo ago
My mistake completely, but I thought this was going to be something to do with a new scheme or re-thinking of graphics programming APIs, like Metal, Vulkan or OpenGL. Now I'm kind of bummed that it is what it is, because I got really excited for it to be that other thing. =(
pjmlp•7mo ago
That is already taking place with work graphs and by making shader languages more C++-like.
ttoinou•7mo ago
Seems like with it you will be able to compile and execute the same code on multiple GPU targets, though.
ashvardanian•7mo ago
There is a "hush-hush open secret" between minutes 31 and 33 of the video :)
refulgentis•7mo ago
TL;DR: the same binary runs on Nvidia and ATI today, but it's not announced yet.
throwaway314155•7mo ago
They desperately need to disable whatever noise cancellation they're using on the audio. Keeps cutting out, sounds terrible.
solarmist•7mo ago
Yeah, the mic quality was terrible.
hogepodge•7mo ago
This was the first time we ran an event in the office with this wireless mic setup. We're definitely aware of the problems, and will have them fixed for the next event.
Archit3ch•7mo ago
> Other Accelerators (e.g. Apple Silicon GPUs): free for <= 8 devices

From their license.

It's not obvious what happens when you have >8 users, with one GPU each (typical laptop users).

threecheese•7mo ago
This is covered by ARM, which they consider a CPU, and doesn't fall into that clause. IOW, no restrictions.