If you can reliably automate that, it's still a pretty big deal.
Disclaimer: co-author of atopile here
The author's prompt is basically already a meticulous specification of the PCB, even proactively telling the LLM to avoid certain pitfalls ("GPIO19 and GPIO20 on the ESP32-S3 module are USB D- and D+ respectively. Make sure these nets are labeled correctly so that differential routing works"). If you had no prior experience building that exact thing, writing that spec would be 95% of the work.
Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!
Nowadays most mainstream LLMs support pre-bundled prompts. GitHub Copilot even made it a major feature and tools like Visual Studio Code have integrated support for prompt files.
https://docs.github.com/en/github-models/use-github-models/s...
Also, LLMs can generate prompt files themselves. I recommend you set aside 10 minutes to vibe-code a prompt file for PCB generation, and then try to recreate the same project as OP. You'd be surprised.
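For concreteness, here's roughly what I mean. The filename follows VS Code's prompt-file convention, and the rules themselves are just an illustrative starting point to iterate on:

    # pcb-design.prompt.md -- illustrative starting point, iterate on it
    You are an experienced PCB designer generating hardware description code.
    - Verify every pinout against the datasheet; never guess pin functions.
    - Label USB differential pairs (D+/D-) explicitly so differential routing works.
    - Never route power rails at minimum trace width; size traces for current.
    - Place a 100nF decoupling capacitor at each IC power pin.
    - Call out anything needing manual review (antennas, thermal pads, connectors).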
> Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!
I don't agree. Vibecoding doesn't exactly mean taking a naive approach to implementation. It just means you provide higher-level inputs to generate whatever you're creating.
Sure, but the utility of that for PCB design wasn't demonstrated in the article. This is an expert going out of his way to give the LLM a task it can't fumble (and still does, a bit).
Forget about the article. Try it yourself. Set aside 5 or 10 minutes to ask any LLM of your choice to generate an LLM prompt for generating PCBs. Iterate on your prompt before using it to generate your PCB. See the result for yourself.
It’s amazing that this worked at all, but to be clear, this layout is actually very bad. Just look at that minimum-width trace used to carry power across the entire board and into the ESP32. Using min-width traces and snaking them around components at minimum clearance is a classic mistake of people (or LLMs?) who have zero understanding of PCB layout techniques beyond “draw lines until everything is connected”.
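To put numbers on why that power trace is a problem, here's a back-of-the-envelope IR-drop check (a sketch with illustrative dimensions, not a simulation):

    # Resistive drop along a trace; 1 oz copper is ~35 um thick.
    RHO_CU = 1.68e-8  # resistivity of copper, ohm*m

    def trace_drop_mv(current_a, length_mm, width_mm, copper_um=35.0):
        area_m2 = (width_mm * 1e-3) * (copper_um * 1e-6)
        resistance_ohm = RHO_CU * (length_mm * 1e-3) / area_m2
        return current_a * resistance_ohm * 1e3  # millivolts

    # A WiFi TX burst on an ESP32 can pull ~0.5 A. Across 100 mm of trace:
    print(trace_drop_mv(0.5, 100, 0.15))  # ~160 mV at 6 mil width
    print(trace_drop_mv(0.5, 100, 1.0))   # ~24 mV at 1 mm width

On a 3.3V rail, 160 mV of sag on top of regulator dropout is exactly how you get mysterious resets during WiFi activity.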
It would be interesting to see if you could feed the file into an LLM and get it to produce the feedback.
Good question. KiCad once had a (sort of) built-in autorouter, but it was removed for licensing reasons. So who's doing that?
So what did this project use?
ETA: Other commenters suspect a traditional autorouter based on the poor layout quality. I agree that's also possible, and nothing in the video excludes that. It definitely wasn't the LLM, though.
I assumed the author was more experienced, I suppose this is more of an entry level hobbyist blog. There are some very fundamental problems with routing PCBs like this that are covered in introductory materials.
Also, it certainly wasn't the LLM; atopile doesn't allow you to specify routing as far as I'm aware, their docs seem to tell you to route in KiCad.
- impressive that this worked so well with LLM-generated atopile, given that atopile is about a year old!
- the hardest part of a PCB is still the routing and nonstandard parts of the design; what this did is basically "find a reference design, pick components that match the reference design, and put them on the correct nets" which is the easiest part of the process for people designing PCBs today
- much like with code, 99% of PCBs designed are fairly basic boards implementing the reference design with some small tweaks, and then there is a tiny amount of envelope-pushing designs/crazy complex stuff. Obviously you can't design some fancy PCB with complex RF with this, but give it some time and I'd bet you can probably make a lot of the basic stuff...
As said noob, do you have any resources for basic PCB design/routing? Along the lines of a simple list of things to look out for?
I've only ever done one, and for routing I basically did the "make two ground pours, then keep clicking until everything is connected" process that others have described in this thread. Probably about the same as I'd imagine an autorouter would have done. And it seems like it worked fine in the end. But I'm wondering what obvious things I probably missed, and what the consequences of missing them are? PCB layout articles online seem to quickly get into topics like differential pair length matching, high-frequency / RF circuits, optimizing current return paths, controlled impedance, and so on... none of which I imagine will ever be relevant to me as a hobbyist.
Is it really so implausible that these constraints could be built into the process/algorithm/agentic workflow?
Edit: Also, one could just look to the world of decision tree and route-finding algorithms that could probably do this task better than a language model.
It's like how pairing a coding agent that can run unit tests and iterate is way more powerful than code gen alone.
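A minimal sketch of that loop for PCBs, assuming KiCad 8's kicad-cli (the `pcb drc` subcommand and the shape of its JSON report are assumptions worth verifying) plus hypothetical ask_llm/write_board helpers around your model and netlist generator of choice:

    import json
    import subprocess

    def run_drc(board_path):
        # Run KiCad's design-rule check and collect violations as JSON.
        subprocess.run(
            ["kicad-cli", "pcb", "drc", "--format", "json",
             "--output", "drc.json", board_path],
            check=True,
        )
        with open("drc.json") as f:
            return json.load(f).get("violations", [])

    def design_loop(spec, board_path, max_iters=5):
        prompt = spec
        for _ in range(max_iters):
            write_board(ask_llm(prompt), board_path)  # hypothetical helpers
            violations = run_drc(board_path)
            if not violations:
                return  # the "tests" pass
            # Feed the checker's complaints back, like failing unit tests.
            prompt = spec + "\nFix these DRC violations:\n" + json.dumps(violations)
        raise RuntimeError("DRC still failing after max iterations")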
I don't care about this one example project, but when thousands of people read about it and vibe-code their own hallucinated PCB, hopefully wasting their money is the worst thing that happens. They certainly won't be learning much if the AI does it for them. They also don't get the pride that comes from understanding. They are an imposter, and when someone asks if they made the thing, they will feel like an imposter. Nice job, noob!
I'm active in the world of amateur LED installations, and practically nobody realizes how easy it is to start a fire with a 500 watt power supply (or several of them connected together in bad ways) for their holiday lightshow. "AI" is not likely to help that and will probably make it worse.
"AI" is like the blind leading the blind, and it gives people permission to do the stupidest things. Sometimes it's right, but it's a gamble. It's not going to always give the same answer for the same question, and when it "hallucinates", a noob is unlikely to notice.
I’m puzzled why the post calls it “surprisingly good” when it’s so bad and missing basic requirements for different parts. I guess it’s surprising that anything at all was produced, but it’s weird that the author can’t identify the basic problems with the design.
This is similar to situations where someone uses an LLM to vibe code an app until it kind of works, but then an experienced developer takes one look at the codebase and can immediately see it was not developed with any understanding of the code.
That said, the AMS1117 datasheet shows a tantalum cap on the output. This is presumably because the non-negligible ESR helps stabilize the regulator, though they don't say that explicitly. The LM1117 datasheet explains this better, stating that "the ESR of the output capacitor should range between 0.3 Ω to 22 Ω". (These are very similar parts, just from different manufacturers.)
The ceramic caps chosen here are probably below that, so perhaps it would ring even with correct layout. The prompt guided towards that bad choice when it said all caps should be 0603, since almost all 0603 capacitors are ceramic. The LLM was free to choose a regulator optimized for use with ceramic output caps, but it probably chose the xx1117 because it's so common.
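A trivial check makes the mismatch concrete (the ESR figures are ballpark; the stability window is the LM1117 datasheet's):

    def lm1117_output_cap_stable(esr_ohm):
        return 0.3 <= esr_ohm <= 22.0

    print(lm1117_output_cap_stable(0.01))  # 0603 ceramic X5R/X7R: ~10 mohm -> False
    print(lm1117_output_cap_stable(1.0))   # typical tantalum: ~0.5-3 ohm -> True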
Maybe _then_ we can trust LLMs to design stuff for us.
Except that:
- no parts placement
- no routing
Easily the two hardest / most annoying steps in designing such a straightforward board. Parts placement could be automated, but you’d have to tell something what you wanted, and at that point you might as well just do the placement yourself instead of describing placement requirements.
Frameworks like atopile, tscircuit (disclaimer: I’m a tscircuit lead maintainer) and JITX are critical here because they enable the LLM to apply the deep knowledge it already has. The author is missing a couple of pieces to really get great output: 1) context-friendly datasheets, 2) DRC/semantic review, and 3) LLM-compatible layout methods.
The hardest to build is (3), and it's what I spend 90% of my time on. AI knows how to do spatial layout for things like flex or CSS grid, but it doesn't have a layout method for PCBs. Our approach w/ tscircuit is to develop new layout systems that either match templates, apply new heuristic layouts (we are developing one called “pack”; a toy version is sketched below), or solve simple spatial constraints.
But tl;dr: it is only a matter of time before AI can output PCBs. It is not simple, but we know what works with LLMs from watching the evolution of AI for website generation.
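To make “pack” less abstract, here is a toy greedy version of the idea (in Python, illustrative only, not tscircuit's actual implementation): place the biggest part first, then slot each remaining part at the closest non-overlapping spot to whatever it connects to.

    from dataclasses import dataclass, field

    @dataclass
    class Part:
        name: str
        w: float
        h: float
        connects_to: list = field(default_factory=list)  # names of other parts

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return not (ax + aw <= bx or bx + bw <= ax or
                    ay + ah <= by or by + bh <= ay)

    def pack(parts, grid=0.5, span=10.0):
        placed = {}  # name -> (x, y, w, h)
        for p in sorted(parts, key=lambda q: q.w * q.h, reverse=True):
            best = None
            x = -span
            while x <= span:
                y = -span
                while y <= span:
                    rect = (x, y, p.w, p.h)
                    if not any(overlaps(rect, r) for r in placed.values()):
                        # Cost: wire length to placed neighbors, plus a
                        # small pull toward the origin to break ties.
                        cost = 0.01 * (abs(x) + abs(y)) + sum(
                            abs(x - placed[n][0]) + abs(y - placed[n][1])
                            for n in p.connects_to if n in placed)
                        if best is None or cost < best[0]:
                            best = (cost, rect)
                    y += grid
                x += grid
            placed[p.name] = best[1]
        return placed

Real boards need courtyards, rotation, and routing-aware costs, but "templates or heuristics or constraints" is the shape of it.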
Coincidentally, we just built an MCP server for atopile, and Claude seems to love it. It makes a big difference in usability, and also exposes our re-usable design library[0].
A bit about atopile[1]: Our core idea is to capture design intent in a knowledge graph with constraints and high-level modeling of components and interfaces. This lets us do much more than just AI integrations: we’ve built an in-house constraint solver that can automatically pick passives (resistors, capacitors, etc) based on the values you've constrained in your design.
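To give a flavor of what that means (a toy sketch, not our actual solver):

    # Pick the nearest standard E24 resistor that satisfies a constraint.
    E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
           3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

    def pick_resistor(target_ohm, tolerance=0.05):
        candidates = [m * 10 ** d for d in range(7) for m in E24]
        best = min(candidates, key=lambda r: abs(r - target_ohm))
        if abs(best - target_ohm) / target_ohm > tolerance:
            raise ValueError("no standard value satisfies the constraint")
        return best

    print(pick_resistor(4_700))   # 4700.0
    print(pick_resistor(10_500))  # 10000.0, within the 5% constraint

The real solver works over the whole design graph at once (ratios, dividers, RC constants), not one part at a time.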
Currently, atopile directly generates KiCAD PCB files, so you can finish the layout (mainly the connections between reusable layout blocks). We're also generating artifacts like I2C bus trees and 3D models, with power trees and schematic generation on the roadmap.
Happy to answer questions or go into technical details!
I mean, some people are claiming that LLMs can do scientific research, so the above isn't too much to ask.
Maybe this is pedantic, but I thought that the core point of "Vibe Coding" is that you do not look at the code. You "give in to the 'vibes'".
I don't know how to translate it into a physical hardware product exactly, but I think it would be manufacturing it without looking at it, plugging it in for your use-case and seeing if it works, then going back to the model, saying it didn't work, rinse, repeat.
Yet I have to say that if you are correct, the term is no different than eating tide pods or dry swallowing cinnamon. Why tf would anyone impose such an absurd artificial constraint on themselves, on the tool, or on whatever they are trying to build? Good faith question, I promise.
Constructing detailed prompts to ultimately pair program impressive, complex outcomes is what I assumed vibe coding was. After 35 years of not being able to tell a computer to write the code for me, even getting an 80% coherent first pass of a sophisticated refactor was already radical enough.
If that's what vibe coding is, then nobody should be using that term because it might be the perfect example of "just because you can, doesn't mean you should".
IDK! I don't think Vibe Coding, with the definition that I understand, is a good idea.
But the term comes from here: https://x.com/karpathy/status/1886192184808149383
And the key parts are:
> "forget that the code even exists"
> "I don't read the diffs anymore"
I myself am unclear on what the "vibes" that one is giving into actually are. But terms should have meanings and my understanding from reading the original tweet is that "Vibe Coding" means something distinct from "coding using some AI to help".
I appreciate the explanation. Off to get some cinnamon, I suppose.
It's the behaviorism of programming. (Pay no attention to the man behind the curtain).
Personally I use the term "agentic coding" if you are describing the specs to the LLM agent at a high level but still taking some minimal amount of time to review the diffs.
It's been my direct and many-times-repeated experience that o3 is an incredible electronics engineering wingman, so long as you follow good LLM hygiene; basically, verify all important assumptions, actually read the datasheets, err on the side of too much detail.
The time spent crafting prompts is the time I would spend planning and iterating on designs anyhow. Unlike a human, I don't have to pay them by the hour to patiently explain the nuances of different diodes or suggest alternative parts. o3 is remarkably good at rapidly grokking intent and making suggestions that have unblocked me.
For the camp of armchair quarterbacks on this site who demand specific "evidence" that we're not all just hallucinating the value of these tools, here are three things that happened just this week:
I was blowing my brains out troubleshooting a touch IC, IS31SE5117A. No matter how good my reflow or how many units I tried, I could not bring up an I2C connection. Based only on the fact that Cref refused to rise above ~0.1V when it's supposed to be about 0.7V, o3 suggested it was likely that I had units from a batch that shipped with no firmware. After going back and forth with their lead engineer for a week, I ordered a few IS32SE5117A - automotive/medical spec, same chip - and it worked immediately, prompting a product recall.
I'd managed to implement galvanic isolation on my USB connection to eliminate audio hum, but it turns out that touching a capacitive pad on a device that has no outside ground connection means that static has nowhere to go but to reboot the microcontroller. I'd been chasing my tail on this for a while, but o3 suggested that instead of isolating my whole device, I could just isolate my MIDI OUT circuit. This is one of those facepalm moments that only seems obvious in hindsight. I told my partner that abandoning weeks of effort was first very hard, and then very easy.
Finally, last night I had Cursor generate both sides of an SPI connection between two ESP32-S3s, something I had never done before. I obviously could have figured it out in 2020, but it would have taken me 1-2 weeks and it wouldn't be nearly as clean or cover as many edge cases.
My hottest take is that LLMs are already (far?) more valuable for engineering tasks than coding. That's kind of unfair because by definition, these tasks involve coding. The speed at which I've been able to iterate has been kind of nuts.
Also: any claim that people who tackle complex domains from a cold start somehow aren't learning fundamentals is simply wrong; they're learning from a mentor with infinite patience and awareness of every part and circuit design pattern.
Having well-established, unambiguous rules that must be followed for functionality seems to be a key predictor of AI success. The more constrained and rule-bound the domain, the better LLMs perform.