> o3 finds the kerberos authentication vulnerability in 8 of the 100 runs
And I'd guess this only became a blog post because the author already knew about the vuln and was just curious to see if the intern could spot it too, given a curated subset of the codebase
What?
Wouldn't such an LLM be closer to a synthetic version of a person who has worked on a codebase for years and learnt all its quirks?
There's so much you can fit in a large context window; some codebases are already 200k tokens just for the code as is, so I don't know.
You can spend all day reading slop, or you can get good at this yourself and be much more efficient at the task. Especially if you're the developer and already know where to look and how things work, catching up on security issues relevant to your situation will be much faster than looking for the needle in the haystack that is LLM output.
We've found a wide range of results, and we have a conference talk coming up soon where we'll be releasing everything publicly, so stay tuned for that. It'll be pretty illuminating on the state of the space.
Edit: confusing wording
We have several deployments in other people's clouds right now as well as usage of our own cloud version, so we're flexible here.
And it will do this no matter how many prompts you try or how forcefully you ask it.
How do you know if it triggered the vulnerability? Luckily, for low-level memory safety issues like the ones Sean (and o3) found, we have very good oracles for detecting memory safety violations, like KASAN, so you can basically just let the agent throw inputs at ksmbd until you see something that looks kind of like this: https://groups.google.com/g/syzkaller/c/TzmTYZVXk_Q/m/Tzh7SN...
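For anyone wondering what that oracle check can look like in practice, here is a minimal sketch (mine, not the author's harness), assuming the target kernel is built with KASAN and the harness can read the kernel log; the marker strings are illustrative:

  # Poll the kernel log for a KASAN splat after each input the agent throws at ksmbd.
  import subprocess

  KASAN_MARKERS = ("BUG: KASAN", "KASAN: use-after-free", "KASAN: slab-out-of-bounds")

  def kasan_triggered() -> bool:
      log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
      return any(marker in log for marker in KASAN_MARKERS)

  # After each agent-generated SMB interaction:
  #   if kasan_triggered(): save the reproducer and stop.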
Designing and building meaningfully testable non-trivial software is orders of magnitude more complex than writing the business logic itself. And that's when comparing against writing greenfield code from scratch. Making an old legacy code base testable in a way conducive to finding security vulns is not something you just throw together. You can get lucky with standard tooling like sanitizers and Valgrind, but it's far from a panacea.
Every time a new frontier LLM is released (excluding LLMs that use input as training data) I run the interview questions through it. I've been surprised that my rate of working responses remains consistently around 1:10 on the first pass, and it often takes upwards of 10 rounds of poking to get it to find its own mistakes.
So this level of signal to noise ratio makes sense for even more obscure topics.
Interviewees don't get to pick the language?
If you're hiring based on proficiency in a particular tech stack, I'm curious why. Are there that many candidates that you can be this selective? Is the language so dissimilar that the uninitiated would need a long time to get up to speed? Does the job involve working on the language itself and so a specifically deep understanding is required?
It can be annoying, but manageable. I've never coded in Java for example, but knowing C#, C++ and Python I imagine it wouldn't be too hard to pick up.
Regarding the job ads, yes, they describe the ideal candidate, but in my experience the perfect candidate never actually shows up. Like you say, if you know J, T and Z, the company is confident enough that you'll quickly pick up dotting the Is and crossing the 7s.
For leetcode interviews, sure. Other than that, at least familiarity with the language is paramount, or with the same class of language.
Q1: Who is using ksmbd in production?
Q2: Why?
I researched quite extensively prior to landing on SMB, but it really seems like there isn't a better way of doing this. The environment was mixed windows/linux, but if there was a better pure linux solution I would've pushed our office staff to switch to Ubuntu.
2. Samba performance sucks (by comparison) which is why people still regularly deploy Windows for file sharing in 2025.
Anybody know if this supports native Windows-style ACLs for file permissions? That is the last remaining reason to still run Solaris but I think it relies on ZFS to do so.
Samba's reliance on Unix UID/GID and the syncing as part of its security model is still stuck in the 1970s unfortunately.
The caveat is the in-kernel SMB server has been the source of at least one holy-shit-this-is-bad zero-day remote root hole in Windows (not sure about Solaris) so there are tradeoffs.
Sigh. This is why we can't have nice things
Like yeah, having SMB in the kernel is faster in practice, but honestly it's not fundamentally faster. It just seems the will to make Samba better isn't there.
The "don't blame the victim" trope is valid in many contexts. This one application might be "hackers are attacking vital infrastructure, so we need to fund vulnerabilities first". And hackers use AI now, likely hacked into and for free, to discover vulnerabilities. So we must use AI!
Therefore, the hackers are contributing to global warming. We, dear reader, are innocent.
[1] https://techcrunch.com/2025/04/02/openais-o3-model-might-be-...
Oh my god, the world is gonna end. Too bad we panicked because of exaggerated energy consumption numbers for using an LLM to do individual work.
Yes, when a lot of people do a lot of prompting, these one-tenth-of-a-second to eight seconds of microwave-equivalent run time per prompt add up. But I strongly suggest that we could all drop our energy consumption significantly by other means, instead of blaming the blog post's author for his energy consumption.
The "lot of burned coal" is probably not that much in this blog post's case given that 1 kWh is about 0.12 kg coal equivalent (and yes, I know that we need to burn more than that for 1kWh. Still not that much, compared to quite a few other human activities.
If you want to read up on it, James O'Donnell and Casey Crownhart try to pull together a detailed account of AI energy usage for MIT Technology Review.[1] I found that quite enlightening.
[1]: https://www.technologyreview.com/2025/05/20/1116327/ai-energ...
Because I definitely don't care. Energy expenditure numbers are always used in isolation, lest anyone have to deal with anything real about them, and they are always content to ignore the abstraction that electricity is. Namely, electricity is not coal. It's electricity. Unlike, say, driving my petrol-powered car, the power for my computers might come from solar panels, coal, nuclear power stations, geothermal, hydro...
Which is to say, if people want to worry about electricity usage: go worry about it by either building more clean energy, or campaigning to raise electricity prices.
So about 50% of CO2 emissions in Germany come from 20 sources. Campaigns like the personal carbon footprint (invented by BP) are there to shift the blame to consumers, away from those with the biggest impact and the most options for action.
So yes, I f**ng don't care if a security researcher leaves his microwave equivalent running for a few minutes. But I do care and campaign in the bigger sense, and I also orient my own consumption towards cleaner options wherever possible.
Knowing full well that, even while being mostly reasonable in my consumption, I definitely belong to the 5-10% of Earth's population who drive the problem. Because more than half of the population in the so-called first world lives according to the Paris Climate Agreement, and it's not the upper half.
Usually LLMs come out far ahead in those types of calculations. Compared to humans they are quite energy efficient
When the cost of inference gets near zero, I have no idea what the world of cyber security will look like, but it's going to be a very different space from today.
We are facing global climate change event, yet continue to burn resources for trivial shit like it’s 1950.
Does this really reflect the resource cost of finding this vulnerability?
But this poster actually understands the AI output and is able to find real issues (in this case, use-after-free). From the article:
> Before I get into the technical details, the main takeaway from this post is this: with o3 LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you’re an expert-level vulnerability researcher or exploit developer the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective.
Certainly for Govt agencies and others this will not be a factor. It is just for everyone else. This will cause people to use other models and agents without these restrictions.
It is safe to assume that a large number of vulnerabilities exist in important software all over the place. Now they can be found. This is going to set off arms race game theory applied to computer security and hacking. Probably sooner than expected...
This is likely because the author didn't give Claude a scratchpad or space to think, essentially forcing it to mix its thoughts with its report. I'd be interested to see if using the official thinking mechanism gives it enough space to get differing results.
For example, I think 2.5 Pro and Claude 4 are probably better at programming. But, for debugging, or not-super-well-defined reasoning tasks, or even just as a better search, o3 is in a league of its own. It feels like it can do a wider breadth of tasks than other models.
[0] https://arxiv.org/pdf/2201.11903
[1] https://docs.anthropic.com/en/docs/build-with-claude/extende...
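For reference, giving Claude that scratchpad is mostly a matter of turning on extended thinking per [1]. A rough sketch with the Anthropic Python SDK (the model name and token budgets here are placeholders, not from the article):

  import anthropic

  client = anthropic.Anthropic()
  response = client.messages.create(
      model="claude-3-7-sonnet-20250219",   # placeholder model name
      max_tokens=8192,                      # must be larger than the thinking budget
      thinking={"type": "enabled", "budget_tokens": 4096},
      messages=[{"role": "user", "content": "Audit this code for memory-safety bugs: ..."}],
  )
  # Thinking arrives in separate content blocks, so the final report isn't
  # mixed with the model's scratch work.
  report = "".join(block.text for block in response.content if block.type == "text")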
It reveals how good LLM use, like any other engineering tool, requires good engineering thinking for best results: methodical, and oriented around thoughtful specifications that balance design constraints.
It all seems like vibes-based incantations. "You are an expert at finding vulnerabilities." "Please report only real vulnerabilities, not any false positives." Organizing things with made-up HTML tags because the models seem to like that for some reason. Where does engineering come into it?
> In fact my entire system prompt is speculative in that I haven’t ran a sufficient number of evaluations to determine if it helps or hinders, so consider it equivalent to me saying a prayer, rather than anything resembling science or engineering. Once I have ran those evaluations I’ll let you know.
This seems more like wishful thinking and fringe stuff than CS.
The interesting thing here is the LLM can come to very complex correct answers some of the time. The problem space of understanding and finding bugs is so large that this isn't just by chance, it's not like flipping a coin.
The issue for any particular user is the amount of testing required to make this into science is really massive.
Of course, if you try to do that for all of the potential false positives that's going to take a _lot_ of tokens, but then we already spend a lot of CPU cycles on fuzzing so depending on how long you let the LLM churn on trying to get a PoC maybe it's still reasonable.
He shows how the prompt is parsed, etc. Very nice and eye-opening. Also superstition-dispelling.
> Use XML tags to structure your prompts
> There are no canonical “best” XML tags that Claude has been trained with in particular, although we recommend that your tag names make sense with the information they surround.
https://docs.anthropic.com/en/docs/build-with-claude/prompt-...
1. Having workflows to be able to provide meaningful context quickly. Very helpful.
2. Arbitrary incantations.
I think No. 2 may provide some random amounts of value with one model and not the other, but as a practitioner you shouldn't need to worry about it long-term. Patterns models pay attention to will change over time, especially as they become more capable. No. 1 is where the value is at.
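As a concrete (if toy) example of No. 1, the kind of helper I mean just packs the relevant files into a tagged prompt quickly; tag names are arbitrary, per the docs quoted above, and the file names are only illustrative:

  from pathlib import Path

  def build_context(paths: list[str], question: str) -> str:
      # Wrap each file in a tag so the model can tell the sources apart.
      parts = [f'<file path="{p}">\n{Path(p).read_text()}\n</file>' for p in paths]
      return "<code_context>\n" + "\n".join(parts) + "\n</code_context>\n\n" + question

  prompt = build_context(["smb2pdu.c", "connection.c"],
                         "Look for use-after-free bugs reachable from a remote client.")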
Speaking from my own experience as a systems grad student: I find it a lot more useful to maintain a project wiki with LLMs in the picture. It makes coordinating with human collaborators easier too, and I just copy-paste the entire wiki before beginning a conversation. Any time I have a back-and-forth with an LLM about design discussions that I want archived, I ask it to emit markdown, which I then copy-paste into the wiki. It's not perfectly organized, but it keeps the key bits there and makes generating papers etc. that much easier.
I use these prompts everywhere. I get significantly better results mostly because it encourages backtracking and if I were to guess, enforces a higher confidence threshold before acting.
The expert engineering ones usually end up creating mountains of slop, refactoring things, and touching a bunch of code it has no business messing with.
I also have used lazy prompts: "You are positively allergic to rewriting anything that already exists. You have multiple mcps at your disposal to look for existing solutions and thoroughly read their documentation, bug reports, and git history. You really strongly prefer finding appropriate libraries instead of maintaining your own code"
Now I wonder how the model reasons between the two words in that black box of theirs.
However, as I was testing it, it would do reckless and irresponsible things. After I changed what the bot sees to "Do-Ur-Inspection" mode, it became radically better.
None of the words you give it are free from consequences. It didn't just discard the "DUI" name as a mere title and move on. Fascinating lesson.
Quantitative benchmarks are not necessary anyway. A method either gets results or it doesn't.
I'm not objecting to the incantations or the vibes per se. I'm happy to use AI and try different methods to get the results I want. I just don't understand the claims that prompting is a type of engineering. If it were, then you would need benchmarks.
But yeah prompt engineering is a field for a reason, as it takes time and experience to get it right.
A problem with LLMs as well is that they're inherently probabilistic, so sometimes they'll just choose an answer with a super low probability. We'll probably get better at this in the next few years.
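A toy illustration of why that happens when sampling at a non-zero temperature (the numbers are made up): a 2%-probability answer still gets picked every so often.

  import random

  options = {"correct": 0.93, "plausible_but_wrong": 0.05, "nonsense": 0.02}
  picks = [random.choices(list(options), weights=list(options.values()))[0]
           for _ in range(1000)]
  print(picks.count("nonsense"))   # roughly 20 out of 1000 samples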
You just described one critical aspect of engineering: discovering a property of a system and feeding that knowledge back into a systematic, iterative process of refinement.
If the act of discovery and iterative refinement makes prompting an engineering discipline, then is raising a baby also an engineering discipline?
> then is raising a baby also an engineering discipline?
The key to science and engineering is repeatability. Raising a baby is an N=1 trial, no guarantees of repeatability.
The author deserves more credit here, than just "vibing".
Those prompts should be renamed as hints. Because that's all they are. Every LLM today ignores prompts if they conflict with its sole overarching goal: to give you an answer no matter whether it's true or not.
At its heart, that is all engineering principles exist to do: allow us to extract useful value, and hopefully predictable outcomes, from systems that are either poorly understood or too expensive to economically characterise. Engineering is more or less the science of "good enough".
There’s a reason why computer science, and software engineering are two different disciplines.
> Software engineering is the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software.
> The adoption of an engineering approach to software development is important for two main reasons. First, software development is always an exercise in discovery and learning, and second, if our aim is to be “efficient” and “economic,” then our ability to learn must be sustainable.
> This means that we must manage the complexity of the systems that we create in ways that maintain our ability to learn new things and adapt to them.
That is why I don't care about LLMs per se, but their usage is highly correlated with the user's wish not to learn anything, just to have some answer, even an incorrect one, as long as it passes the evaluation process (compilation, review, CI tests, ...). If the usage is to learn, I don't have anything to say.
As for efficient and economical solutions that can be found with them,...
I’ve personally found them extremely useful to test and experiment new ideas. Having an LLM throw together a PoC which would have taken me an hour to create, in less than 5mins, is a huge time saver. Makes it possible to iterate through many more ideas and test my understanding of systems far more efficiently than doing the same by hand.
For the rest of us less fortunate, LLMs can be a fantastic tool to sketch out novel modules quickly, and then test assumptions and interactions between them, before committing to a specific high level design.
Not really. It's just that there's a lot of prior work out there, so I don't need to do experimentation when someone has already done it and described the lessons learned. Then you do requirements analysis and some design (system, API, and UX); add the platform constraints and there aren't a lot of flexible points left. I'm not doing research on software engineering.
For a lot of projects, the objective is to get something working out there. Then I can focus on refining if need be. I don't need to optimize every parameter with my own experiments.
I'm currently dealing with a project that involves developing systems where the existing prior art is either completely proprietary and inaccessible, or public but extremely nascent, such that the documented learnings are less developed than our own learnings and designs.
Many projects may have the primary objective of getting something working. But we don’t all have the luxury of being able to declare something working and walk away. I specifically have requirements around long term evolution of our project (I.e. over a 5-10 year time horizon at a minimum), plus long term operational burden and cost. While also delivering value in the short term.
LLMs are an invaluable tool for exploring the many possible solutions to what we're building, and for helping to evaluate the longer-term consequences of our design decisions before we've committed significant resources to developing them completely.
Of course we could do all this without LLMs, but LLMs substantially increase the distance we can explore before timelines force us to commit.
> Most of my coding is fully planned to get to the end. The experiment part is on a much smaller scale (module level).
It would seem that these statements taken together mean you don't experiment at all?
Ah I think we’re finally getting somewhere. My point is that you can use LLM as part of that research process. Not just as a poor substitute for proper research, but as a tool for experimental research. It’s supplemental to the normal research process, and is certainly not a tool for creating final outputs.
Using LLMs like that can make a meaningful difference to speed and quality of the analysis and final design. And something you should consider, rather than dismissing out of hand.
What's the alternative?
It’s reasonable to scope one’s interest down to easily predictable, simple systems.
But most of the value in math and computer science is at the scale where there is unpredictability arising from complexity.
But a lot of the trouble in these domains that I have observed comes from unmodeled effects, that must be modeled and reasoned about. GPZ work shows the same thing shown by the researcher here, which is that it requires a lot of tinkering and a lot of context in order to produce semi-usable results. SNR appears quite low for now. In security specifically, there is much value in sanitizing input data and ensuring correct parsing. Do you think LLMs are in a position to do so?
In the hands of an expert, I believe they can help. In the hands of someone clueless, they will just confuse everyone, much like any other tool the clueless person uses.
If your C compiler invents a new function call for a non-existent function while generating code, that's usually a bug.
If an LLM does, that's... Normal. And a non-event.
What other engineering domain operates on a fundamentally predictable substrate? Even computer science at any appreciable scale or complexity becomes unpredictable.
An engineer doesn't just shrug and pick up slag because it contains the same materials as the original bauxite.
We’re basically in the stone ages of understanding how to interact with synthetic intelligence.
But attempts to integrate little understood things in daily life gave us radium toothpaste and lead poisoning. Let's not repeat stone age mistakes. Research first, integrate later.
You invoke "engineering principles", but software engineers constantly trade in likelihoods, confidence intervals, and risk envelopes. Using LLMs is no different in that respect. It's not rocket science. It's manageable.
Software engineering is mostly about dealing with human limitations (both of the person writing the code and of its readers). So you have principles like modularization and cohesion, which are for the people working on the code, not the computer. We also have tests, which are an imperfect but economical approach to ensuring the correctness of the software. Every design decision can be justified or argued, and the outcome can be predicted and weighed. You're not cajoling a model to get results. You take a decision and just do it.
I like to think of them as beginnings of an arbitrary document which I hope will be autocompleted in a direction I find useful... By an algorithm with the overarching "goal" of Make Document Bigger.
(As an engineer it’s part of your job to know if the problem is being solved correctly.)
Are you insinuating that dealing with unstable and unpredictable systems isn't somewhere engineering principles are frequently applied to solve complex problems?
> In fact my entire system prompt is speculative so consider it equivalent to me saying a prayer, rather than anything resembling science or engineering
Just like Eisenhower's famous "plans are useless, planning is indispensable" quote. The muscle you build is creating new plans, not memorizing them.
It’s surprisingly effective to ask LLMs to help you write prompts as well, i.e. all my prompt snippets were designed with help of an LLM.
I personally keep them all in an org-mode file and copy/paste them on demand in a ChatGPT chat as I prefer more “discussion”-style interactions, but the approach is the same.
Works incredibly well, and I created it with its own help.
The more you can frame the problem with your expertise, the better the results you will get.
[1] https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
I recently found a pretty serious security vulnerability in a very niche open source server I sometimes use. This took virtually no effort using LLMs. I'm worried that there is a huge long tail of software out there that wasn't worth finding vulnerabilities in manually for nefarious purposes, but that could lead to really serious problems if the process were automated.
I wouldn't (personally) call it an alignment issue, as such.
I agree that over time you end up with a steady state, but in the short-to-medium term the attackers have a huge advantage.
> I tried to strongly guide it to not report false positives, and to favour not reporting any bugs over reporting false positives. I have no idea if this helps, but I’d like it to help, so here we are. In fact my entire system prompt is speculative in that I haven’t ran a sufficient number of evaluations to determine if it helps or hinders, so consider it equivalent to me saying a prayer, rather than anything resembling science or engineering. Once I have ran those evaluations I’ll let you know.
...if your linux kernel has ksmbd built into it; that's a much smaller interest group
This is I suppose an area where the engineer can apply their expertise to build a validation rig that the LLM may be able to utilize.
You could in theory automate the entire process, treat the LLM as a very advanced fuzzer. Run it against your target in one or more VMs. If the VM crashes or otherwise exhibits anomalous behavior, you've found something. (Most exploits like this will crash the machine initially, before you refine them.)
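The crash-detection half of that loop can be as dumb as a liveness probe against the VM. A minimal sketch (the host and port are placeholders for whatever VM you spun up):

  import socket

  def target_alive(host: str = "192.168.122.10", port: int = 445, timeout: float = 5.0) -> bool:
      # If the SMB port stops answering, the last candidate probably crashed
      # or wedged ksmbd; snapshot the VM and keep the reproducer.
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False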
On one hand: great application for LLMs.
On the other hand: conversely implies that demonstrating this doesn't mean that much.
(Also yeah feels like the "FIRST!!1!eleven" thing metastasized from comment sections into C-level executives…)
rest of the link is tracking to my (limited) understanding
https://lwn.net/Articles/871866/ This also has nothing to do with Samba, which is a well-trodden path.
So why not attack a codebase that is rather more heavily used and older? Why not go for vi?
4 years after the article, does any relevant distro have that implementation enabled?
[0] https://security.googleblog.com/2024/11/leveling-up-fuzzing-...
[1] https://googleprojectzero.blogspot.com/2024/10/from-naptime-...
What the post says is "Understanding the vulnerability requires reasoning about concurrent connections to the server, and how they may share various objects in specific circumstances. o3 was able to comprehend this and spot a location where a particular object that is not referenced counted is freed while still being accessible by another thread. As far as I'm aware, this is the first public discussion of a vulnerability of that nature being found by a LLM."
The point I was trying to make is that, as far as I'm aware, this is the first public documentation of an LLM figuring out that sort of bug (non-trivial amount of code, bug results from concurrent access to shared resources). To me at least, this is an interesting marker of LLM progress.
But what if there's a missing piece of the puzzle that the author and devs missed or assumed o3 covered, but in fact was out of o3's context, that would invalidate this vulnerability?
I'm not saying there is, nor am I going to take the time to do the author's work for them, rather I am saying this report is not fully validated which feels like a dangerous precedent to set with what will likely be an influential blog post in the LLM VR space moving forward.
IMO the idea of PoC || GTFO should be applied more strictly than ever before to any vulnerability report generated by a model.
The underlying perspective that o3 is much better than previous or other current models still remains, and the methodology is still interesting. I understand the desire and need to get people to focus on something by wording it a specific way, it's the clickbait problem. But dammit, do better. Build a PoC and validate your claims, don't be lazy. If you're going to write a blog post that might influence how vulnerability researchers conduct their research, you should promote validation and not theoretical assumption. The alternative is the proliferation of ignorance through false-but-seemingly-true reporting, versus deepening the community's understanding of a system through vetted and provable reports.
1) If it is actually a UAF or if there is some other mechanism missing from the context that prevents UAF. 2) The category and severity of the vulnerability. Is it even a DoS, RCE, or is the only impact causing a thread to segfault?
This is all part of the standard vulnerability research process. I'm honestly surprised it got merged in without a PoC, although with high profile projects even the suggestion of a vulnerability in code that can clearly be improved will probably end up getting merged.
I'm curious which sector of infosec you're referring to in which vulnerability researchers are not required to provide proofs of concept? Maybe internal product VR where there is already an established trust?
Since you’re interested: the bug is real but it is, I think, hard to exploit in real world scenarios. I haven’t tried. The timing you need to achieve is quite precise and tight. There are better bugs in ksmbd from an exploitation point of view. All of that is a bit of a “luxury problem” from the PoV of assessing progress in LLM capabilities at finding vulnerabilities though. We can worry about ranking bugs based on convenience for RCE once we can reliably find them at all.
Yeah race conditions like that are always tricky to make reliable. And yeah I do realize that the purpose of the writeup was more about the efficacy of using LLMs vs the bug itself, and I did get a lot out of that part, I just hyper-focused on the bug because it's what I tend to care the most about. In the end I agree with your conclusion, I believe LLMs are going to become a key part of the VR workflow as they improve and I'm grateful for folks like yourself documenting a way forward for their integration.
Anyways, solid writeup and really appreciate the follow-up!
You've got all the elements for a successful optimization algorithm: 1) A fast and good enough sampling function + 2) a fairly good energy function.
For 1) this post shows that LLMs (even unoptimized) are quite good at sampling candidate vulnerabilities in large code bases. A 1% accuracy rate isn't bad at all, and they can be made quite fast (at least very parallelizable).
For 2) theoretically you can test any exploit easily and programmatically determine if it works. The main challenge is getting the energy function to provide a gradient: some signal when you're close to finding a vulnerability/exploit.
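Put together, the shape of such a system is pretty simple. A hand-wavy sketch (the names are mine, not from the post), where sample() is the LLM proposing candidate vulns from context and energy() is whatever scoring signal you can extract (sanitizer output, static checks, partial PoCs):

  def search(sample, energy, seeds, rounds=10, keep=5, width=20):
      pool = list(seeds)
      for _ in range(rounds):
          candidates = [sample(context=pool) for _ in range(width)]  # 1) LLM as sampler
          scored = sorted(candidates, key=energy, reverse=True)      # 2) energy function as filter
          pool = scored[:keep]                                       # carry the best leads forward
      return pool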
I expect we'll see such a system within the next 12 months (or maybe not, since it's the kind of system that many lettered agencies would be very interested in).
Take a problem with a clear definition and an evaluation function, and let the LLM reduce the size of the solution space. LLMs are very good at pattern reconstruction, and if the solution has a similar pattern to something known before, this can work very well.
In this case the problem is a specific type of security vulnerability and the evaluator is the expert. This is similar in spirit to other recent endeavors where LLMs are used in genetic optimization; on a different scale.
Here's an interesting read on "Mathematical discoveries from program search with large language models", which I believe was also featured on HN in the past:
https://www.nature.com/articles/s41586-023-06924-6
One small note: concluding that the LLM is "reasoning" about code just _based on this experiment_ is a bit of a stretch IMHO.
I think the NSA already has this, without the need for a LLM.