
Andrej Karpathy's YC AI SUS talk on the future of the industry

https://www.donnamagi.com/articles/karpathy-yc-talk
181•pudiklubi•4h ago

Comments

pudiklubi•3h ago
For context - I was in the audience when Karpathy gave this amazing talk on software 3.0. YC has said the official video will take a few weeks to release, by which time, Karpathy himself said, the talk will be deprecated.

https://x.com/karpathy/status/1935077692258558443

levocardia•2h ago
To complete the loop, we need an AI avatar of Karpathy doing text-to-voice from the transcript. Who says AI can't boost productivity!
msgodel•1h ago
I listened to it with an old-fashioned CMU speech synth.
chrisweekly•2h ago
Do the talk's predictions about the future of the industry project beyond a few weeks? If so, I'd expect the salient points of the talk to remain valid. Hmm...
swyx•1h ago
i synced the slides with the talk transcript here: https://latent.space/s3
pudiklubi•1h ago
so you took my transcript and put it behind a newsletter sub? haha. just share them!
swyx•1h ago
not quite, i compiled the slides within a few hours of the talk yesterday well before your transcript was available. the slides are my main output/contribution. a full slides+transcript is too long for substack. i've linked your transcript prominently for people to find, and used it to fix slide ordering because twitter people took terrible notes for the purpose of exact talk reconstruction.

i expect YC to prioritize publishing this talk, so probably the half-life of any of this work is measured in days anyway.

100% of our podcast is published for free, but we still have ~1000 people who choose to support our work with a subscription (it does help pay for editors, equipment, and travel). I always feel bad that we don't have much content for them, so i figured i'd put just the slide compilation up for subscribers. i'm trying to find nice ways to ramp up value for our subs over time, mostly by showing "work in progress" things like this that i had to do anyway to summarize/internalize the talk properly - which, again, is what we published entirely free/no subscription required

swyx•1h ago
btw i think your transcript is missing most of the Perplexity slide discussion, right after the Cursor example
theyinwhy•1h ago
What poor judgement he must have if his outlook becomes irrelevant in a few weeks' time.

Edit: the emoji at the end of the original sentence was not quoted. What a difference a smile makes. Original tweet: https://x.com/karpathy/status/1935077692258558443

theturtletalks•1h ago
It was in jest, more a comment on how quickly things move in AI.
qwertox•1h ago
We better stop talking about the future then.
scottyah•1h ago
Every time you talk about the future, it gets altered.
amarait•1h ago
Hard determinists would disagree
swah•3h ago
See also https://www.latent.space/p/s3
swyx•1h ago
thanks - i've now also updated the powerpoint with matched transcript to slides - so we are now fully confident in the slide order and you can basically watch the talk with slides
pudiklubi•1h ago
haha, love how we have the two pieces of the puzzle. we should merge!
fenghorn•2h ago
First time using NotebookLM and it blew my mind. I pasted in the OP's transcription of the talk into NotebookLM and got this "podcast": https://notebooklm.google.com/notebook/5ec54d65-f512-4e6c-9c...
steveBK123•1h ago
This sounds like an infomercial
jckahn•1h ago
How so? That sounds like a completely realistic use of NotebookLM.
steveBK123•1h ago
Just the tone, I mean I listen to podcasts a bit but.. yuck. I'd rather just read than listen to this.
jolan•1h ago
I think he meant the audio output itself sounds like an infomercial.

https://www.youtube.com/watch?v=gfr4BP4V1R8

steveBK123•1h ago
Correct, yes.

That NotebookLM podcast was like the most unpleasant way I can imagine to consume content. Reading transcripts of live talks is already pretty annoying because it's less concise than the written word. Having it re-expanded by robot-voice back to audio to be read to me just makes it even more unpleasant.

Also it's sort of perverse that we are going audio -> transcript -> fake audio. "YC has said the official video will take a few weeks to release" - I mean, shouldn't one of the 100 AI startups solve this for them?

Anyway, maybe it's just me.. I'm the kind of guy that got a cynical chuckle at the airport the other week when I saw a "magazine of audio books".

jasonjmcghee•1h ago
I have the same perspective - to such a degree that any time I see someone post a notebooklm I wonder if it's paid advertising. Every time I've tried it on something like a whitepaper etc. it just makes stuff up or says things that are kind of worthless. Reminds me of ChatGPT 3.5 in terms of quality of the presented content.

The voices sounded REALLY good the first time I used it. But then they sounded exactly the same every time after that, and I became underwhelmed.

nico•1h ago
Thank you for putting it together, it was pretty good - 27m:12s
jimmy76615•2h ago
The talk is still not available on YouTube? What is taking them so long?
layer8•1h ago
Apparently AI doesn’t make you a 10x YouTube releaser. ;)
datameta•1h ago
Certainly it can make you a 10x video releaser, but not a 10x Youtuber.
no_wizard•2h ago
On a large contention of this essay (which I'm assuming the talk is based on or is transcribed from, depending on order): I do think that open source models will eventually catch up to closed source ones, or at least be "good enough", and I also think you can already see how LLMs are augmenting knowledge work.

I don’t think it’s the 4th wave of pioneering a new dawn of civilization but it’s clear LLMs will remain useful when applied correctly.

umeshunni•1h ago
> I do think that open source models will eventually catch up to closed source ones

It felt like that was the direction for a while, but in the last year or so, the gap seems to have widened. I'm curious whether this is my perception or validated by some metric.

no_wizard•1h ago
This was how early browsers felt too: the open source browser engines were slower at adapting than the ones developed by Netscape and Microsoft, but eventually it all reversed, and open source excelled past the closed source software.

Another way to put it: you see this over time. It usually takes a little while for open source projects to catch up, but once they do, they gain traction quite quickly over their closed source counterparts.

tayo42•1h ago
Those were way simpler projects in the beginning when that happened. Do you think a new browser could catch up to Chrome today?
no_wizard•52m ago
The tech behind LLMs has been open source for a very long time. Look at DeepSeek and Llama, for example. They aren't yet as capable as, say, Gemini, but they aren't "miles behind" either, especially if you know how to tune the models to be purpose-built[0].

The time horizons will be different as they always are, but I believe it will happen eventually.

I’d also argue that browsers got complicated pretty fast, a far cry from libhtml in a few short years.

[0]: of which I contend that most useful applications of this technology will not be the generalized ChatGPT interface but specialized, highly tuned models that don't need the scope of a generalized querying interface

msgodel•1h ago
Already today I can use aider with qwen3 for free but have to pay per token to use it with any of the commercial models. The flexibility is worth the lower performance.
QRY•1h ago
Do you have anything to share on that workflow? I've been trying to get a local-first AI thing going, could use your insights!
msgodel•1h ago
It's super easy. I already had llama.cpp/llama-server set up for a bunch of other stuff, and actually had my own homebrew RAG dialog engine; aider is just way better.

One crazy thing is that since I keep all my PIM data in git in flat text I now have essentially "siri for Linux" too if I want it. It's a great example of what Karpathy was talking about where improvements in the ML model have consumed the older decision trees and coded integrations.

I'd highly recommend /nothink in the system prompt. Qwen3 is not good at reasoning and tends to get stuck in loops until it fills up its context window.

My current config is qwen2.5-coder-0.5b for my editor plugin and qwen3-8b for interactive chat and aider. I use nibble quants for everything. 0.5b is not enough for something like aider, 8b is too much for interactive editing. I'd also recommend shrinking the ring context in the neovim plugin if you use that since the default is 32k tokens which takes forever and generates a ton of heat.
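The setup described above can be sketched roughly like this; the model filename, port, and aider model alias are assumptions, so adjust them to whatever GGUF quants you actually have locally:

```shell
# Serve a 4-bit ("nibble") quant of Qwen3-8B with llama.cpp's
# OpenAI-compatible server (filename is a placeholder):
llama-server -m Qwen3-8B-Q4_K_M.gguf --port 8080

# Point aider at the local endpoint; adding /nothink to the system
# prompt disables Qwen3's reasoning mode, as suggested above:
OPENAI_API_BASE=http://localhost:8080/v1 OPENAI_API_KEY=local \
  aider --model openai/qwen3-8b
```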

bix6•56m ago
Why would open source outpace? Isn’t there way more money in the closed source ones and therefore more incentive to work on them?
adamnemecek•2h ago
AGI = approximating the partition function. Everything else is just a poor substitute.
pera•2h ago
Is "Software 3.0" somehow related to "Web 3.0"?
fhd2•2h ago
Pure coincidence, I'm sure :)
pudiklubi•2h ago
No – for more context you can check out Karpathy's original essay from 2017: https://karpathy.medium.com/software-2-0-a64152b37c35
lcnPylGDnU4H9OF•1h ago
No, they are totally unrelated. Web 3.0 is blockchain-backed web applications (rather than proprietary server-backed web applications, which is Web 2.0) and Software 3.0 is LLM-powered agents.
knowaveragejoe•1h ago
What was Software 2.0?
jckahn•1h ago
And what was Software 1.0?
Karrot_Kream•1h ago
It's using NNs or other ML models that are given datasets and learn from them. https://karpathy.medium.com/software-2-0-a64152b37c35

If you read the talk you can find out this and more :)

lcnPylGDnU4H9OF•1h ago
Trained Neural Networks.
msgodel•1h ago
Early application specific ML models, Software 1.0 is normal programs.
uncircle•39m ago
They are related in that they are pure marketing buzzwords to build enormous hype around a product, if not a dream.
lvl155•2h ago
I soak up everything Andrej has to say.
msgodel•2h ago
This is almost exactly what I've experienced with them. It's a great talk, I wish I could have seen it in person.
afiodorov•1h ago
> So, it was really fascinating that I had the MenuGen demo basically working on my laptop in a few hours, and then it took me a week because I was trying to make it do it

Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.

I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug - especially if the rewrite is into something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code further Claude-codeable or Cursor-codeable; ready to iterate even faster.

andai•1h ago
The last 10% always takes 1000% of the time...
afiodorov•1h ago
I am not saying rewrites are always warranted, but I think LLMs change the cost-benefit balance considerably.
steveklabnik•1h ago
I am with you on this. I'm very much not sure about rewrites, but LLMs do change the cost-benefit balance of refactorings considerably, for me. Both in a "they let me make a more informed decision about proceeding with the refactoring" and "they are faster at doing it than I am".
alganet•1h ago
> imagine changing it and programming the computer's life

> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration

> imagine inspecting them, and it's got an autonomy slider

> imagine works as like this binary array of a different situation, of like what works and doesn't work

--

Software 3.0 is imaginary. All in your head.

I'm kidding, of course. He's hyping because he needs to.

Let's imagine together:

Imagine it can be proven to be safe.

Imagine it being reliable.

Imagine I can pre-train on my own cheap commodity hardware.

Imagine no one using it for war.

Henchman21•1h ago
If I’m going to be leaning on my imagination this much I am going to imagine a world where the tech industry considers at great length whether or not something should be built.
alganet•1h ago
Let me be clear about what I think: I have zero fear of an AI apocalypse. I think the fear is part of the scam.

The danger I see is related to psychological effects caused by humans using LLMs on other humans. And I don't think that's a scenario anyone is giving much attention to, and it's not that bad (it's bad, but not world end bad).

I totally think we should all build it. To be trained from scratch on cheap commodity hardware, so that a lot of people can _really_ learn it and quickly be literate on it. The only true way of democratizing it. If it's not that way, it's a scam.

serial_dev•1h ago
I tried to imagine all that he described and felt literally nothing. If he wants to hype AI, he should find his Steve Jobs.
msgodel•1h ago
It was easy for me to see and it's incredible. Maybe I should be launching a startup.
kaladin-jasnah•1h ago
Tangentially related, but it boggles my mind this guy was badmephisto, who made a quite famous cubing tutorial website that I spent plenty of time on in my childhood.
Frummy•1h ago
Totally not a supervillain

"Q: What does your name (badmephisto) mean?

A: I've had this name for a really long time. I used to be a big fan of Diablo2, so when I had to create my email address username on hotmail, i decided to use Mephisto as my username. But of course Mephisto was already taken, so I tried Mephisto1, Mephisto2, all the way up to about 9, and all was taken. So then I thought... "hmmm, what kind of chracteristic does Mephisto posess?" Now keep in mind that this was about 10 years ago, and my English language dictionary composed of about 20 words. One of them was the word 'bad'. Since Mephisto (the brother of Diablo) was certainly pretty bad, I punched in badmephisto and that worked. Had I known more words it probably would have ended up being evilmephisto or something :p"

Aeroi•1h ago
TL;DR: Karpathy says we’re in Software 3.0: big language models act like programmable building blocks where natural language is the new code. Don’t jump straight to fully autonomous “agents”—ship human-in-the-loop tools with an “autonomy slider,” tight generate-→verify loops, and clear GUIs. Cloud LLMs still win on cost, but on-device is coming. To future-proof, expose clean APIs and docs so these models (and coming agents) can safely read, write, and act inside your product.
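The "autonomy slider" and generate -> verify loop in this summary can be sketched as a small control loop. Everything here is a hypothetical stand-in, not a real API: `generate_candidates` plays the role of an LLM call and `verify` the role of a ground-truth check (tests, a linter, or ultimately a human).

```python
def generate_candidates(task, n=3):
    # Stand-in for an LLM call: propose n candidate edits for a task.
    return [f"{task} (candidate {i})" for i in range(n)]

def verify(candidate):
    # Stand-in for the ground-truth check; pretend only the first passes.
    return "candidate 0" in candidate

def run_with_autonomy(task, autonomy=0.0):
    """autonomy=0.0: everything goes to human review;
    autonomy>=0.5: verified candidates are auto-accepted."""
    accepted, needs_review = [], []
    for cand in generate_candidates(task):
        if verify(cand) and autonomy >= 0.5:
            accepted.append(cand)       # auto-accepted by the loop
        else:
            needs_review.append(cand)   # escalated to the human
    return accepted, needs_review

accepted, review = run_with_autonomy("rename helper", autonomy=1.0)
```

Sliding `autonomy` down to 0.0 routes every candidate to the human queue, which is the "keep the AI on a leash" end of the slider.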
arkj•1h ago
>Software 2.0 are the weights which program neural networks. >I think it's a fundamental change, is that neural networks became programmable with large libraries... And in my mind, it's worth giving it the designation of a Software 3.0.

I think it's a bit early to change your mind here. We love your 2.0; let's wait some more time till the dust settles so we can see clearly, and then up the revision number.

In fact, I'm a bit confused about the numbering AK has in mind. Does anyone else know how he arrived at Software 2.0?

I remember a talk by professor Sussman where he suggest we don't know how to compute, yet[1].

I was thinking he meant this,

Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs

If we are calling weights 2.0 and NN with libraries as 3.0, then shouldn't we account for functional and oo programming in the numbering scheme?

[1] https://www.youtube.com/watch?v=HB5TrK7A4pI

autobodie•1h ago
Objectivity is lacking throughout the entire talk, not only in the thesis. But objectivity isn't very good for building hype.
bigyabai•30m ago
Reminds me of Vitalik Buterin. I spent a lot of my starry-eyed youth reading his blog, and was hopeful that he was applying the learned-lessons from the early days of Bitcoin. Turned out he was fighting the wrong war though, and today Ethereum gets less lip service than your average shitcoin. The whole industry went up in flames, really.

Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.

Karrot_Kream•30m ago
I'm curious why you think that? I thought the talk was pretty grounded. There was a lot of skepticism of using LLMs unbounded to write software and an insistence on using ground truth free from LLM hallucination. The main thesis, to me, seemed like "we need to write software that was designed with human-centric APIs and UI patterns to now use an LLM layer in front and that'll be a lot of opportunity for software engineers to come."

If anything it seemed like the middle ground between AI boosters and doomers.

baxtr•30m ago
How can someone so smart become a hype machine? I can’t wrap my head around it. Maybe he had the opportunity to learn from someone he worked closely with?
TeMPOraL•17m ago
> How can someone so smart become a hype machine? I can’t wrap my head around it.

Maybe they didn't, and it's just your perception.

throwawayoldie•13m ago
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." --Upton Sinclair
barrkel•7m ago
Maybe you haven't seen the frontier and envisioned the possibilities?
pests•1h ago
I think how Andrej views 3.0 is hinted at by his later analogy about Tesla. He saw a ton of manually written Software 1.0 C++ replaced by the weights of the NN. What we used to write manually in explicit code is now incorporated into the NN itself, moving the implementation from 1.0 to 3.0.
bcrosby95•57m ago
I might be wrong, but it seems like some people are misinterpreting what is being said here.

Software 3.0 isn't about using AI to write code. It's about using AI instead of code.

So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
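A minimal sketch of that distinction with a toy sentiment classifier; `llm` below is a hypothetical stand-in for any chat-completion client, not a real library:

```python
# Software 1.0: the behavior lives in explicit, hand-written code.
def sentiment_1_0(text: str) -> str:
    positive = {"great", "love", "good"}
    negative = {"bad", "awful", "hate"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Software 3.0: the "program" is a natural-language prompt; the model
# weights do the work. `llm` is a placeholder for a real client.
def sentiment_3_0(text: str, llm=None) -> str:
    prompt = f"Classify the sentiment as positive, negative, or neutral: {text!r}"
    if llm is None:
        raise NotImplementedError("plug in a real chat-completion client here")
    return llm(prompt)

print(sentiment_1_0("I love this talk"))  # positive
```

In the 1.0 version the magic is the code; in the 3.0 version the code is just plumbing around a prompt.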

__loam•34m ago
This industry is so tiring
mattgreenrocks•21m ago
Definitely. And it gets more tiring the more experience you have, because you've seen countless hype cycles come and go with very little change. Each time, the same mantra is chanted: "but this time, it's different!" Except, it usually isn't.

Started learning metal guitar seriously to forget about industry as a whole. Highly recommended!

zie1ony•9m ago
This is a great idea, until you have to build something.
bmicraft•5m ago
The AI route isn't much easier when you consider that the "AI" step is actually: create dataset -> train model -> fine-tune model -> run model to train a much smaller model -> ship much smaller model to end devices.
imiric•4m ago
So... Who builds the AI?

This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

bredren•50m ago
Anyone know what "oil bank" was in the actual talk?
romain_batlle•49m ago
The analogy with the grid seems pretty good. The fab one seems bad tho.
mattlangston•47m ago
Very nice find @pudiklubi. Thank you.
uncircle•45m ago
AI sus talk. Kinda appropriate.
sammcgrail•43m ago
You’ve got “two bars” instead of “two rs” in strawberry
computator•21m ago
I was going to ask what this meant about strawberries:

> LLMs make mistakes that basically no human will make, like, you know, it will insist that 9.11 is greater than 9.9, or that there are two bars of strawberry. These are some famous examples.

But you answered it: It’s a stupid mistake a human makes when trying to mock the stupid mistakes that LLMs make!
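The 9.11 vs. 9.9 confusion is easy to reproduce outside an LLM, because the comparison genuinely flips between the decimal reading and the version-number reading (a toy illustration, not from the talk):

```python
# As decimals, 9.9 is the larger number; read as version numbers,
# 9.11 comes after 9.9 -- the ambiguity that trips up LLMs.
as_decimals = 9.11 > 9.9          # False: 9.11 < 9.9 numerically
as_versions = (9, 11) > (9, 9)    # True: version 9.11 is newer than 9.9
print(as_decimals, as_versions)   # False True
```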

amelius•34m ago
Does it say anything about how this will affect wealth distribution?
