That NotebookLM podcast was like the most unpleasant way I can imagine to consume content. Reading transcripts of live talks is already pretty annoying because it's less concise than the written word. Having it re-expanded by robot-voice back to audio to be read to me just makes it even more unpleasant.
It's also sort of perverse that we're going audio -> transcript -> fake audio. "YC has said the official video will take a few weeks to release" - I mean, shouldn't one of the 100 AI startups solve this for them?
Anyway, maybe it's just me. I'm the kind of guy who got a cynical chuckle at the airport the other week when I saw a "magazine of audiobooks".
The voices sounded REALLY good the first time I used it. But they sounded exactly the same every time after that, and I became underwhelmed.
I don’t think it’s the 4th wave of pioneering a new dawn of civilization, but it’s clear LLMs will remain useful when applied correctly.
It felt like that was the direction for a while, but in the last year or so, the gap seems to have widened. I'm curious whether this is my perception or validated by some metric.
Another way to put it: you see this pattern over time. It usually takes a little while for open source projects to catch up, but once they do, they gain traction quite quickly over their closed source counterparts.
The time horizons will be different as they always are, but I believe it will happen eventually.
I’d also argue that browsers got complicated pretty fast: a far cry from libhtml in a few short years.
[0]: I contend that the most useful applications of this technology will not be the generalized ChatGPT interface but specialized, highly tuned models that don’t need the scope of generalized querying
One crazy thing is that since I keep all my PIM data in git in flat text I now have essentially "siri for Linux" too if I want it. It's a great example of what Karpathy was talking about where improvements in the ML model have consumed the older decision trees and coded integrations.
I'd highly recommend /nothink in the system prompt. Qwen3 is not good at reasoning and tends to get stuck in loops until it fills up its context window.
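For reference, a minimal sketch of what that looks like, assuming an OpenAI-compatible chat message layout (the message format is standard; the model/server setup and the assistant persona are illustrative, and `/nothink` is the switch as written in this comment):

```python
# Sketch: prepend the suggested /nothink switch to the system prompt so
# Qwen3 skips its "thinking" phase. The role/content layout is the
# standard OpenAI-compatible chat format; everything else is up to you.
def build_messages(user_prompt: str) -> list[dict]:
    system = "/nothink You are a concise coding assistant."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Rename x to count in this function.")
```

Whatever client you use (aider, a neovim plugin, raw API calls), the point is the same: the switch rides along in the system prompt on every request.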
My current config is qwen2.5-coder-0.5b for my editor plugin and qwen3-8b for interactive chat and aider. I use nibble quants for everything. 0.5b is not enough for something like aider; 8b is too much for interactive editing. I'd also recommend shrinking the ring context in the neovim plugin if you use it, since the default is 32k tokens, which takes forever and generates a ton of heat.
If you read the talk you can find out this and more :)
Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.
I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug - especially if the rewrite moves the code into something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code further Claude-codeable or Cursor-codeable; ready to iterate even faster.
> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration
> imagine inspecting them, and it's got an autonomy slider
> imagine works as like this binary array of a different situation, of like what works and doesn't work
--
Software 3.0 is imaginary. All in your head.
I'm kidding, of course. He's hyping because he needs to.
Let's imagine together:
Imagine it can be proven to be safe.
Imagine it being reliable.
Imagine I can pre-train on my own cheap commodity hardware.
Imagine no one using it for war.
The danger I see is related to psychological effects caused by humans using LLMs on other humans. I don't think that's a scenario anyone is giving much attention to, and it's not that bad (bad, but not end-of-the-world bad).
I totally think we should all build it: trained from scratch on cheap commodity hardware, so that a lot of people can _really_ learn it and quickly become literate in it. That's the only true way of democratizing it. If it's not that way, it's a scam.
"Q: What does your name (badmephisto) mean?
A: I've had this name for a really long time. I used to be a big fan of Diablo2, so when I had to create my email address username on hotmail, i decided to use Mephisto as my username. But of course Mephisto was already taken, so I tried Mephisto1, Mephisto2, all the way up to about 9, and all was taken. So then I thought... "hmmm, what kind of chracteristic does Mephisto posess?" Now keep in mind that this was about 10 years ago, and my English language dictionary composed of about 20 words. One of them was the word 'bad'. Since Mephisto (the brother of Diablo) was certainly pretty bad, I punched in badmephisto and that worked. Had I known more words it probably would have ended up being evilmephisto or something :p"
I think it's a bit early to change your mind here. We love your 2.0; let's wait a while longer till the dust settles so we can see clearly, and then up the revision number.
In fact I'm a bit confused about the number AK has in mind. Does anyone else know how he arrived at Software 2.0?
I remember a talk by professor Sussman where he suggests we don't know how to compute, yet [1].
I was thinking he meant this,
Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs
If we are calling weights 2.0 and NNs with libraries 3.0, then shouldn't we account for functional and OO programming in the numbering scheme?
Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.
If anything it seemed like the middle ground between AI boosters and doomers.
Maybe they didn't, and it's just your perception.
Software 3.0 isn't about using AI to write code. It's about using AI instead of code.
So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
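One way to make the contrast concrete (a toy sentiment example; the `llm` callable is a stand-in for whatever model client you'd actually use, not a real API):

```python
import string

# Software 1.0: the logic is explicit, hand-written code.
def sentiment_1_0(text: str) -> str:
    positive = {"great", "love", "excellent"}
    negative = {"awful", "hate", "terrible"}
    # Strip punctuation so "awful," matches "awful".
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    score = len(words & positive) - len(words & negative)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Software 3.0: the "program" is a natural-language prompt; the model
# does the work directly. `llm` is an assumed callable, not a real client.
def sentiment_3_0(text: str, llm) -> str:
    prompt = (f"Classify the sentiment of the following text as "
              f"positive, negative, or neutral: {text!r}")
    return llm(prompt)
```

In the 1.0 version the behavior is encoded in the word lists and the scoring rule; in the 3.0 version the behavior lives entirely in the prompt and the model's weights.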
Started learning metal guitar seriously to forget about the industry as a whole. Highly recommended!
This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.
> LLMs make mistakes that basically no human will make, like, you know, it will insist that 9.11 is greater than 9.9, or that there are two r's in strawberry. These are some famous examples.
But you answered it: It’s a stupid mistake a human makes when trying to mock the stupid mistakes that LLMs make!
pudiklubi•3h ago
https://x.com/karpathy/status/1935077692258558443
swyx•1h ago
i expect YC to prioritize publishing this talk, so probably the half-life of any of this work is measured in days anyway.
100% of our podcast is published for free, but we still have ~1000 people who choose to support our work with a subscription (it does help pay for editors, equipment, and travel). I always feel bad that we don't have much content for them, so i figured i'd put just the slide compilation up for subscribers. i'm trying to find nice ways to ramp up value for our subs over time, mostly by showing "work in progress" things like this that i had to do anyway to summarize/internalize the talk properly - which, again, is what we published entirely free/no subscription required
theyinwhy•1h ago
Edit: the emoji at the end of the original sentence was not quoted. Funny how a smiley makes all the difference. Original tweet: https://x.com/karpathy/status/1935077692258558443