Boardsmith – text prompt to KiCad schematic, BOM, and firmware (works offline)

https://github.com/ForestHubAI/boardsmith
2•ForestHubAI•1h ago

Comments

ForestHubAI•1h ago
Hey HN,

I've been designing embedded hardware for years, and I kept running into the same problem: I'd wire up the same ESP32 + BME280 + OLED circuit for the fifth time, re-derive the same pull-up resistor values, forget decoupling caps, and spend 30 minutes on something that should take 3. So I built boardsmith.

*What it does:* You give it a text prompt like `"ESP32 with BME280 temperature sensor and SSD1306 OLED display"`, and it generates a complete KiCad 8 schematic (.kicad_sch), a BOM with JLCPCB part numbers, and working Arduino-compatible firmware. Not a template — an actual computed design with correct pull-up resistors, decoupling caps, I2C address assignments, and proper power distribution.

The pipeline has 9 stages: intent parsing, normalization, component selection, topology synthesis, HIR composition, constraint refinement, BOM building, KiCad export, and confidence scoring. There are 11 constraint checks (ERC compliance, voltage/current budgets, pin assignment validation, I2C address conflicts, decoupling requirements, etc.). The output passes KiCad's own ERC.
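
For the curious, the shape of the pipeline can be sketched as a chain of pure functions over a shared design state (a simplified sketch; every name here is illustrative, not our actual API):

```python
# Sketch only: a deterministic staged pipeline, where each stage is a pure
# function from design-state to design-state. Names are illustrative.

def parse_intent(state):
    # B1: in boardsmith this is the only stage an LLM may touch.
    state["mcu"] = "ESP32"
    state["peripherals"] = ["BME280", "SSD1306"]
    return state

def select_components(state):
    # B3: deterministic lookup, so the same prompt always yields the same BOM.
    parts = {"ESP32": "mcu-esp32", "BME280": "sensor-bme280",
             "SSD1306": "oled-ssd1306"}
    state["bom"] = [parts[state["mcu"]]] + [parts[p] for p in state["peripherals"]]
    return state

def check_constraints(state):
    # B6 (one check of many): reject duplicate I2C addresses.
    i2c = {"BME280": 0x76, "SSD1306": 0x3C}
    addrs = [i2c[p] for p in state["peripherals"] if p in i2c]
    state["erc_ok"] = len(addrs) == len(set(addrs))
    return state

def build(prompt):
    state = {"prompt": prompt}
    for stage in (parse_intent, select_components, check_constraints):
        state = stage(state)
    return state

print(build("ESP32 with BME280 sensor and SSD1306 OLED"))
```

Because every stage is a plain function of its input, running the chain twice on the same prompt gives byte-identical output, which is what makes the `--no-llm` determinism claim checkable.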

boardsmith also includes an agentic EDA layer: the ERCAgent automatically repairs ERC violations after schematic generation (bounded to 5 iterations with stall detection). `boardsmith modify` lets you patch existing schematics ("add battery management with TP4056") without touching the synthesis pipeline. And `boardsmith verify` runs 6 semantic verification tools against the design intent.
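
The repair loop's control flow is roughly this (a simplified sketch, not the real ERCAgent; `run_erc` and `apply_fix` stand in for the actual tools):

```python
# Rough sketch of a bounded repair loop with stall detection. The agent gets
# a fixed iteration budget and bails out early if the violation count stops
# shrinking, so a bad fix can never loop forever.

MAX_ITERATIONS = 5

def repair(schematic, run_erc, apply_fix):
    prev_count = None
    for _ in range(MAX_ITERATIONS):
        violations = run_erc(schematic)
        if not violations:
            return schematic, True          # clean ERC
        if prev_count is not None and len(violations) >= prev_count:
            return schematic, False         # stalled: no progress, stop early
        prev_count = len(violations)
        schematic = apply_fix(schematic, violations[0])
    return schematic, False                 # iteration budget exhausted
```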

*The key thing:* `boardsmith build -p "your prompt" --no-llm` works fully offline. No API key, no network access, no cloud calls. It's deterministic — same prompt, same output, every time. The LLM mode is optional and just improves intent parsing for ambiguous prompts. The actual synthesis, constraint solving, and schematic generation are all computed, not generated by a language model.

*What it's good at:* ESP32 and RP2040 projects with sensors, displays, and actuators. I2C/SPI/UART topologies. Clean ERC. JLCPCB-ready Gerber output. 212 verified components with full electrical specs (not scraped datasheets — manually entered and cross-checked). 191 LCSC part mappings for direct JLCPCB SMT assembly.

*What it's not good at:* High-speed digital design (no impedance-controlled routing, no length matching). Analog circuit design (no op-amp topologies, no filter synthesis). STM32 support is in beta and has rough edges. No multi-board designs.

We're [ForestHub.ai](http://ForestHub.ai), a 4-person seed-funded team building tools for hardware engineers.

The CLI is AGPL-3.0 (commercial license available for companies that need it). Source is on GitHub.

Happy to answer questions about the architecture, the constraint solver, why we went AGPL, the ERCAgent repair loop, or how the HIR (Hardware Intermediate Representation) works as the contract between our synthesis and firmware tracks.

Marcus_FH•1h ago
btw, it's free to use!
ForestHubAI•1h ago
exactly, a key feature I did not mention! lol
kone96•1h ago
Impressive pipeline description — but I'm curious about the boundary between "computed" and "LLM-generated." You mention the schematic generation is fully deterministic and the LLM only handles intent parsing. How exactly does that handoff work? Does the constraint solver operate purely on structured intermediate representation, or does the LLM ever influence component selection or topology decisions downstream? Asking because "not an AI wrapper" is a strong claim, and I'd love to understand the architecture well enough to verify it.
ForestHubAI•1h ago
Great question — this is the right thing to probe. Let me walk through the actual architecture.

TL;DR: The LLM is the front door (intent parsing) and an optional QA layer (verify/modify). Everything in between — component selection, topology, constraint solving, value computation, KiCad export — is deterministic code operating on a typed IR.

The pipeline has 9 stages (B1–B9):

  B1 Intent Parsing → B2 Normalization → B3 Component Selection →
  B4 Topology Synthesis → B5 HIR Composition → B6 Constraint Refinement →
  B7 BOM Generation → B8 KiCad Export + ERC → B9 Confidence Scoring

Where the LLM lives: only B1 (Intent Parsing). It turns your natural language prompt into a structured intent — essentially "which MCU, which peripherals, which interfaces." That's it. From B2 onward, the LLM is not in the loop.

The handoff is the HIR (Hardware Intermediate Representation) — a typed Pydantic v2 schema that acts as the contract between stages. Every stage reads HIR, transforms it, and writes it back. Components, connections, voltages, constraints, provenance — all structured, all typed. The constraint solver (B6) operates purely on this IR. It doesn't call an LLM, and it doesn't take text input. It runs 11 deterministic checks: voltage compatibility, I2C address conflicts, power budget, pull-up value computation, decoupling capacitor sizing, etc.
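
To make "typed IR, no text input" concrete, here's what one such check looks like in miniature (stdlib dataclasses standing in for our Pydantic models; names are illustrative):

```python
# Illustrative only: a typed IR node plus one deterministic check (I2C
# address conflicts). The real HIR uses Pydantic v2 models; a frozen
# dataclass stands in here to keep the sketch stdlib-only.
from dataclasses import dataclass

@dataclass(frozen=True)
class I2CDevice:
    ref: str        # schematic reference, e.g. "U2"
    address: int    # 7-bit I2C address

def i2c_address_conflicts(devices):
    """Return {address: [refs]} for every 7-bit address used more than once."""
    by_addr = {}
    for d in devices:
        by_addr.setdefault(d.address, []).append(d.ref)
    return {hex(a): refs for a, refs in by_addr.items() if len(refs) > 1}

devs = [I2CDevice("U2", 0x76), I2CDevice("U3", 0x3C), I2CDevice("U4", 0x3C)]
print(i2c_address_conflicts(devs))   # {'0x3c': ['U3', 'U4']}
```

The point is that the check consumes structured fields, not prose, so there is nothing for a language model to hallucinate into it.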

Component selection (B3) and topology (B4) are also deterministic. They query a SQLite knowledge base of 212 verified components with FTS5 search and range queries. Pull-up values, crystal load caps, level shifters — all computed from datasheet specs stored in the DB, not generated by an LLM.
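
As a concrete example of "computed, not generated": I2C pull-up bounds fall out of closed-form formulas from the I2C spec (UM10204) applied to stored specs. A simplified sketch, with hypothetical default values:

```python
# Sketch of the kind of closed-form computation a deterministic solver does
# from stored specs: I2C pull-up resistor bounds per the I2C spec (UM10204).
def i2c_pullup_bounds(vdd, vol_max=0.4, iol=0.003, t_r=1000e-9, c_bus=100e-12):
    """Return (r_min, r_max) in ohms.

    r_min keeps sink current within spec at VOL (R >= (Vdd - VOL) / IOL);
    r_max keeps the rise time under t_r for bus capacitance c_bus
    (UM10204: Rp <= t_r / (0.8473 * Cb)).
    """
    r_min = (vdd - vol_max) / iol
    r_max = t_r / (0.8473 * c_bus)
    return r_min, r_max

r_min, r_max = i2c_pullup_bounds(3.3)
# r_min ≈ 967 Ω, r_max ≈ 11.8 kΩ, so a standard 4.7 kΩ pull-up sits inside
```

Swap in the actual VOL/IOL/Cb numbers from the component DB and the "correct pull-up resistors" in the output are just arithmetic.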

The easiest way to verify this yourself: `pip install boardsmith` (no `[llm]` extra) and run:

  boardsmith build -p "ESP32 with BME280 sensor" --no-llm

This runs the full pipeline — schematic, BOM, firmware — with zero network calls, zero API keys, zero LLM involvement. Same input → same output, every time. The `--no-llm` mode isn't a degraded fallback; it's the proof that the synthesis engine is self-contained.

Now, to be fully transparent: v0.2 does introduce an agentic layer on top — `boardsmith modify` (brownfield patching) and `boardsmith verify` (semantic verification) use LLM reasoning in a tool-use loop. But these are separate from the core synthesis pipeline. They're optional, and they operate on finished schematics, not within the generation path.
